nCUBE

This article is about the nCUBE parallel computer company and its computers. For other uses, see NCUBE (disambiguation).

nCUBE was a series of parallel computers from the company of the same name. Early generations of the hardware used a custom microprocessor. With its final generations of servers, nCUBE no longer designed custom microprocessors for its machines, but used server-class chips manufactured by a third party in massively parallel hardware deployments, primarily for on-demand video.

Company history

Industry: Computer industry
Founded: 1983
Headquarters: Beaverton, Oregon, United States
Products: Computers

nCUBE was founded in 1983 in Beaverton, Oregon by a group of Intel employees frustrated by Intel's reluctance to enter the parallel computing market, though Intel released its iPSC/1 in the same year the first nCUBE was released. In December 1985, the first generation of nCUBE's hypercube machines was released. The second generation (N2) was launched in June 1989, the third generation (N3) in 1995, and the fourth generation (N4) in 1999.

In 1988, Larry Ellison invested heavily in nCUBE and became the company's majority shareholder. The company's headquarters was relocated to Foster City, California to be closer to Oracle Corporation. In the 1990s, nCUBE shifted its focus from the parallel computing market to the video on demand (VOD) server market. In 1994, Ronald Dilbeck became chief executive officer and set nCUBE on a fast track to an initial public offering.

In 1996, Ellison downsized nCUBE and Dilbeck departed. Ellison took over as acting CEO and redirected the company to become Oracle's Network Computer division. After the network computer diversion, nCUBE resumed development of video servers and deployed its first VOD video server at the Burj Al Arab hotel in Dubai.

In 1999, nCUBE announced it was acquiring SkyConnect, Inc., a seven-year-old Louisville, Colorado software company that developed digital advertising and VOD software for cable television and had been nCUBE's partner in the Burj Al Arab deployment. The company was once again on an IPO fast track, only to be halted by the bursting of the dot-com bubble. In 2000, SeaChange International filed a suit against nCUBE, alleging that nCUBE's MediaCube-4 product infringed a SeaChange patent. A jury upheld the validity of SeaChange's patent and awarded damages, but the U.S. Court of Appeals for the Federal Circuit overturned the ruling on June 29, 2005.

As fallout from the dot-com bust, the recession, and the lawsuit, in April 2001 nCUBE laid off 17% of its workforce and began closing offices (Foster City in 2002 and Louisville in 2003) to downsize and consolidate the company around the Beaverton manufacturing office. Also in 2001, after acquiring patents from Oracle's interactive television division, nCUBE filed a patent infringement suit against SeaChange, claiming that its competitor's video server violated nCUBE's VOD patent on delivery to set-top boxes. nCUBE won the lawsuit and was awarded over $2 million in damages.

In 2002, Ellison stepped down as CEO and named Michael J. Pohl, who had been the company's president since 1999 and was formerly CEO of SkyConnect, as his successor.

In January 2005, nCUBE was acquired by C-COR for approximately $89.5 million.

In December 2007, C-COR was acquired by ARRIS.

Computer models

The first nCUBE machine to be released was the nCUBE 10 of late 1985. It was originally called NCUBE/ten, but the name morphed over time. The machine was based on a set of custom chips: each compute node had a processor chip with a 32-bit ALU, a 64-bit IEEE 754 FPU, and special communication instructions, plus 128 kB of RAM. A node delivered 2 MIPS, 500 kiloFLOPS (32-bit single precision), or 300 kiloFLOPS (64-bit double precision), and there were 64 nodes per board. The host board, based on an Intel 80286, ran a custom Unix-like operating system called Axis, while each compute node ran a 4 kB kernel, Vertex.[1]

The name referred to the machine's ability to form an order-ten hypercube, supporting 1,024 CPUs in a single machine. Some of the modules were used strictly for input/output, including the nChannel storage-control card, frame buffers, and the InterSystem card that allowed nCUBEs to be attached to each other. At least one host board needed to be installed, acting as the terminal driver; it could also partition the machine into sub-cubes and allocate them separately to different users.
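
The hypercube arithmetic behind these figures is straightforward: an order-n hypercube has 2^n nodes, two nodes are wired together exactly when their binary addresses differ in one bit, and a sub-cube can be carved out by fixing some of the address bits. The following Python sketch is purely illustrative (the function names and the prefix-based partitioning scheme are assumptions, not nCUBE system software):

    # Illustrative sketch, not nCUBE system code: hypercube addressing and
    # sub-cube partitioning. An order-n hypercube has 2**n nodes; two nodes
    # are neighbors exactly when their addresses differ in a single bit.

    def node_count(order):
        """Number of nodes in an order-`order` hypercube."""
        return 2 ** order

    def neighbors(node, order):
        """Addresses of the nodes directly wired to `node`."""
        return [node ^ (1 << dim) for dim in range(order)]

    def subcube(prefix, prefix_bits, order):
        """Nodes of the sub-cube obtained by fixing the top `prefix_bits`
        address bits, the way a host board could allocate smaller cubes to
        separate users (prefix scheme assumed for illustration)."""
        low_bits = order - prefix_bits
        base = prefix << low_bits
        return list(range(base, base + 2 ** low_bits))

    print(node_count(10))          # 1024 -- a fully populated nCUBE 10
    print(neighbors(0, 10))        # the ten nodes one hop away from node 0
    print(subcube(0b11, 2, 4))     # an order-2 sub-cube inside an order-4 cube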

Researchers Robert Benner, John Gustafson and Gary Montry of the Parallel Processing Division of Sandia National Laboratories first won the Karp Prize ($100) and then won the first Gordon Bell Prize in 1987 using the nCUBE 10.[2]

Die of nCUBE 2 processor

For the second series the naming was changed, and nCUBE created the single-chip nCUBE 2 processor. This was otherwise similar to the nCUBE 10's CPU, but ran faster at 25 MHz to provide about 7 MIPS and 3.5 megaFLOPS; this was later improved to 30 MHz in the 2S model. RAM was increased as well, with 4 to 16 MB of RAM on a "single wide" 1 in x 3.5 in module, double that on the "double wide" module, and quadruple that on a double-wide, double-sided module. The I/O cards generally had less RAM, with different backend interfaces to support SCSI, HIPPI, etc.

Three single-chip nCUBE 2 processors on a 1" x 3.5" module with memory.
nCUBE 2 circuit board with 64 processors and memory

Each nCUBE 2 CPU also included thirteen I/O channels running at 20 Mbit/s. One of these was dedicated to I/O duties, while the other twelve were used as the interconnect between CPUs. Each channel used wormhole routing to forward messages. The machines themselves were wired up as order-twelve hypercubes, allowing for up to 4,096 CPUs in a single machine.
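
A standard way to route messages on a hypercube of this kind is dimension-ordered ("e-cube") routing: a message crosses, one dimension at a time, exactly the dimensions in which the source and destination addresses differ. Whether nCUBE's wormhole routers used precisely this dimension order is an assumption here; the sketch below only illustrates the idea and the resulting path lengths.

    # Hedged sketch of dimension-ordered ("e-cube") routing on a hypercube.
    # The exact order used by the nCUBE hardware is assumed, not documented here.

    def ecube_route(src, dst, order):
        """Return the sequence of node addresses a message passes through."""
        path = [src]
        current = src
        for dim in range(order):
            if (current ^ dst) & (1 << dim):   # addresses still differ here
                current ^= 1 << dim            # hop across that dimension's link
                path.append(current)
        return path

    # On an order-twelve cube (up to 4096 CPUs), the path length equals the
    # number of differing address bits, so at most twelve hops:
    print(ecube_route(0b000000000101, 0b000000110000, 12))
    # [5, 4, 0, 16, 48]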

Each module ran a 200 kB microkernel called nCX, but the system now used a Sun Microsystems workstation as the front end and no longer needed the host controller. nCX included a parallel filesystem that could do 96-way striping for high performance. C and C++ were available, as were NQS, Linda, and Parasoft's Express, all supported by an in-house compiler team.
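
As a rough illustration of what 96-way striping means, consecutive stripe units of a file are spread round-robin across 96 storage targets so that large transfers hit many devices at once. The stripe-unit size and the round-robin layout below are assumptions for illustration, not documented nCX behaviour.

    # Minimal sketch of striped-file placement. The 96-way width comes from
    # the text; the 64 kB stripe unit and round-robin layout are assumed.

    STRIPE_WIDTH = 96          # number of I/O targets a file is spread across
    STRIPE_UNIT = 64 * 1024    # bytes per target before moving to the next (assumed)

    def locate(offset):
        """Map a byte offset to (target index, offset within that target)."""
        unit_index = offset // STRIPE_UNIT
        target = unit_index % STRIPE_WIDTH
        local = (unit_index // STRIPE_WIDTH) * STRIPE_UNIT + offset % STRIPE_UNIT
        return target, local

    print(locate(0))                      # (0, 0)
    print(locate(64 * 1024))              # (1, 0) -- next unit, next target
    print(locate(96 * 64 * 1024 + 100))   # (0, 65636) -- wraps after 96 targets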

The largest nCUBE 2 system installed was at Sandia National Laboratories, a 1024-CPU system that reached 1.91 gigaFLOPS in testing. In addition to the nCX operating system, it also ran the SUNMOS lightweight kernel for research purposes.[3]

The nCUBE 3 CPU included several improvements and moved to a 64-bit ALU. Among the other improvements was a process shrink to 0.5 μm, allowing the speed to be increased to 50 MHz (with plans for 66 and 100 MHz). The CPU was also superscalar and included 16 kB instruction and data caches, and an MMU for virtual memory support.

Additional I/O links were added, with two dedicated to I/O and sixteen for interconnects, allowing for up to 65,536 CPUs in the hypercube. The channels operated at 100 Mbit/s, due to the use of 2-bit parallel links instead of the serial lines used previously. The nCUBE 3 also added fault-tolerant adaptive routing support, in addition to fixed routing.

A fully loaded nCUBE 3 machine could use up to 65,536 processors, for roughly 3 TIPS and 6.5 teraFLOPS. The maximum memory would be 65 TB, with a network I/O capability of 24 TB/s. The design was thus biased toward I/O, which is usually the limiting factor. The nChannel board provided 16 I/O channels, each supporting transfers at 20 MB/s.
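
Those aggregates are simply the per-node capabilities scaled by the 65,536-node maximum. A quick back-of-the-envelope check (the per-node breakdown is inferred here, not stated in the source):

    # Per-node figures implied by the quoted system-level maxima for a
    # fully loaded, order-16 (65,536-node) nCUBE 3. Inferred, not sourced.

    NODES = 2 ** 16

    totals = {
        "instructions": (3.0e12, "instr/s"),    # 3 TIPS
        "floating point": (6.5e12, "FLOPS"),    # 6.5 teraFLOPS
        "memory": (65e12, "bytes"),             # 65 TB
    }

    for name, (total, unit) in totals.items():
        print(f"{name:>14}: {total / NODES:,.0f} {unit} per node")

    #   instructions: 45,776,367 instr/s per node   (~46 MIPS)
    # floating point: 99,182,129 FLOPS per node     (~100 MFLOPS at 50 MHz)
    #         memory: 991,821,289 bytes per node    (~1 GB)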

A microkernel was developed for the nCUBE 3 machine but never completed; it was abandoned in favor of Transit, an operating system based on Plan 9.

The nCUBE 4 marked the transition to commodity processors, with each node containing an Intel IA-32 server-class CPU. The n4 also brought an exclusive focus on video streaming rather than scientific applications. Each hub contained one hypercube node, one CPU, a pair of PCI buses, and up to 12 SCSI drives. The n4 was followed by the n4x, the n4x r2, and the n4x r3; the last two were based on ServerWorks chipsets rather than Intel ones. The nCUBE 5 was very similar to the n4 family but incorporated two hypercube nodes in each hub and only supported video streaming over Gigabit Ethernet.

References

  1. Hayes, J.; Mudge, T.; Stout, Q.; Colley, S. & Palmer, J. (1986). "A microprocessor-based hypercube supercomputer". IEEE Micro. 6 (5): 6–17. doi:10.1109/MM.1986.304707.
  2. http://scicomp.ewha.ac.kr/netlib/benchmark/bell1
  3. Rolf Riesen; Lee Ann Fisk; et al. "SUNMOS?" (a paper that explains what SUNMOS is; CiteSeer cached copy). Retrieved 2006-05-19.
