GlusterFS

Original author(s): Gluster
Developer(s): Red Hat, Inc.
Stable release: 3.8.4 / September 15, 2016
Preview release: 3.7.8[1] / February 10, 2016
Operating system: Linux, OS X, FreeBSD, NetBSD, OpenSolaris
Type: Distributed file system
License: GNU General Public License v3[2]
Website: www.gluster.org

GlusterFS is a scale-out network-attached storage file system. It has found applications in cloud computing, streaming media services, and content delivery networks. GlusterFS was originally developed by Gluster, Inc. and then by Red Hat, Inc., following Red Hat's acquisition of Gluster in 2011.[3]

In June 2012, Red Hat Storage Server was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux.[4] In April 2014, Red Hat bought Inktank Storage, the company behind the Ceph distributed file system, and re-branded the GlusterFS-based Red Hat Storage Server as "Red Hat Gluster Storage".[5]

Design

GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 and others dual-licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.

GlusterFS has a client and a server component. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a volume. The glusterfs client process, which connects to servers with a custom protocol over TCP/IP, InfiniBand or Sockets Direct Protocol, creates composite virtual volumes from multiple remote servers using stackable translators. By default, files are stored whole, but striping of files across multiple remote volumes is also supported. The final volume may then be mounted by the client host using its own native protocol via the FUSE mechanism, mounted using the NFS v3 protocol via a built-in server translator, or accessed via the gfapi client library. Native-protocol mounts may then be re-exported, e.g. via the kernel NFSv4 server, Samba, or the object-based OpenStack Storage (Swift) protocol using the "UFO" (Unified File and Object) translator.
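
As an illustration of the gfapi access path, the following is a minimal C sketch that reaches a volume through libgfapi without a FUSE mount. The volume name "gv0", the server name "server1", and the file path are placeholders chosen for this example, and error handling is kept to a minimum.

  /* Minimal sketch: accessing a GlusterFS volume through the gfapi
   * client library (libgfapi) without a FUSE mount. The volume "gv0",
   * the server "server1" and the file name are placeholders.
   * Build on a host with the glusterfs-api development files installed:
   *   gcc gfapi_example.c -o gfapi_example -lgfapi                    */
  #include <glusterfs/api/glfs.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* Create a handle for the volume and point it at a glusterd
       * management server (default management port 24007). */
      glfs_t *fs = glfs_new("gv0");
      if (!fs)
          return 1;
      glfs_set_volfile_server(fs, "tcp", "server1", 24007);

      /* Fetch the volume graph (translator stack) and initialise it. */
      if (glfs_init(fs) != 0) {
          fprintf(stderr, "failed to initialise volume\n");
          glfs_fini(fs);
          return 1;
      }

      /* Create and write a small file directly on the volume. */
      glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_WRONLY, 0644);
      if (fd) {
          const char msg[] = "hello from gfapi\n";
          glfs_write(fd, msg, strlen(msg), 0);
          glfs_close(fd);
      }

      glfs_fini(fs);
      return 0;
  }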

Most of the functionality of GlusterFS is implemented as translators, including file-based mirroring and replication, file-based striping, file-based load balancing, volume failover, scheduling and disk caching, storage quotas, and volume snapshots with user serviceability (since GlusterFS version 3.6).

The GlusterFS server is intentionally kept simple: it exports an existing directory as-is, leaving it up to client-side translators to structure the store. The clients themselves are stateless, do not communicate with each other, and are expected to have translator configurations consistent with each other. GlusterFS relies on an elastic hashing algorithm, rather than using either a centralized or distributed metadata model. With version 3.1 and later of GlusterFS, volumes can be added, deleted, or migrated dynamically, helping to avoid configuration coherency problems, and allowing GlusterFS to scale up to several petabytes on commodity hardware by avoiding bottlenecks that normally affect more tightly coupled distributed file systems.
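
As a rough illustration of hash-based placement, the sketch below hashes file names into a 32-bit space divided evenly among bricks. It is a simplification for exposition only: it uses a generic FNV-1a hash and equal static ranges, not the actual hashing algorithm or per-directory layout ranges used by GlusterFS's distribute translator.

  /* Simplified illustration of hash-based file placement across bricks.
   * This is NOT GlusterFS's actual algorithm: a generic FNV-1a hash and
   * equal static ranges merely show the idea of mapping a file name into
   * a 32-bit hash space partitioned among bricks.                       */
  #include <stdint.h>
  #include <stdio.h>

  /* 32-bit FNV-1a hash of a file name (illustrative stand-in). */
  static uint32_t hash_name(const char *name)
  {
      uint32_t h = 2166136261u;
      for (; *name; name++) {
          h ^= (uint8_t)*name;
          h *= 16777619u;
      }
      return h;
  }

  int main(void)
  {
      const char *bricks[] = { "server1:/data/brick1", "server2:/data/brick1",
                               "server3:/data/brick1", "server4:/data/brick1" };
      const char *files[]  = { "report.pdf", "video.mp4", "notes.txt" };
      const int nbricks = 4, nfiles = 3;

      /* Each brick owns an equal slice of the 32-bit hash space; a file
       * lives on the brick whose slice contains the hash of its name.  */
      for (int i = 0; i < nfiles; i++) {
          uint32_t h = hash_name(files[i]);
          int brick  = (int)(((uint64_t)h * nbricks) >> 32);
          printf("%-12s -> %s\n", files[i], bricks[brick]);
      }
      return 0;
  }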

GlusterFS has been used as the foundation for academic research[6][7] and a survey article.[8]

Red Hat markets the software for three markets: "on-premises", public cloud and "private cloud".[9]

References

  1. "glusterfs-3.7.8 released". gluster.org. 10 Feb 2016. Retrieved 10 Feb 2016.
  2. "Gluster 3.1: Understanding the GlusterFS License". Gluster Documentation. Gluster.org. Retrieved 2014-04-30.
  3. Timothy Prickett Morgan (4 Oct 2011). "Red Hat snatches storage Gluster file system for $136m". The Register. Retrieved 3 July 2016.
  4. Timothy Prickett Morgan (June 27, 2012). "Red Hat Storage Server NAS takes on Lustre, NetApp". The Register. Retrieved May 30, 2013.
  5. "Red Hat Storage. New product names. Same great features.". redhat.com. Mar 20, 2015. Retrieved 2015-03-20.
  6. Noronha, Ranjit; Panda, Dhabaleswar K (9–12 September 2008). IMCa: A High Performance Caching Front-End for GlusterFS on InfiniBand (PDF). 37th International Conference on Parallel Processing, 2008. ICPP '08. IEEE. doi:10.1109/ICPP.2008.84. Retrieved 14 June 2011.
  7. Kwidama, Sevickson (2007–2008), Streaming and storing CineGrid data: A study on optimization methods (PDF), University of Amsterdam System and Network Engineering, retrieved 10 June 2011
  8. Klaver, Jeroen; van der Jagt, Roel (14 July 2010), Distributed file system on the SURFnet network Report (PDF), University of Amsterdam System and Network Engineering, retrieved 9 June 2012
  9. "Red Hat Storage Server". Web site. Red Hat. Retrieved May 30, 2013.
