Granular configuration automation

Granular configuration automation (GCA) is a specialized area within configuration management that focuses on visibility and control of an IT environment's configuration and bill of materials at the most granular level. The framework aims to improve the stability of IT environments by analyzing granular configuration information: it responds to the need to gauge the threat level of an environment risk, and lets IT organizations concentrate on the risks with the highest impact on performance.[1] GCA combines two major trends in configuration management: the move to collect detailed and comprehensive environment information, and the growing use of automation tools.[2]

Driving factors

IT systems have grown in complexity,[3] supporting a wider and still-growing range of technologies and platforms, while accelerating application release schedules demand attention to ever more information.[4] The average Global 2000 firm has more than a thousand applications that its IT organization deploys and supports.[5] New platforms such as cloud computing and virtualization offer benefits in reduced server footprint and energy savings, but they complicate configuration management through issues such as sprawl.[6] The need to ensure high availability and consistent delivery of business services has led many companies to develop automated configuration, change and release management processes.[7]

Downtime and system outages undermine the environments that IT professionals manage. Despite advances in infrastructure robustness, occasional hardware, software and database downtime still occurs. Dun & Bradstreet reports that 49% of Fortune 500 companies experience at least 1.6 hours of downtime per week, which translates into more than 80 hours annually.[8] The growing cost of downtime has given IT organizations ample evidence of the need to improve their processes: a conservative Gartner estimate puts the hourly cost of network downtime at $42,000, so a company suffering worse-than-average downtime of 175 hours a year can lose more than $7 million annually.[9]
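The cited figures lend themselves to a quick back-of-the-envelope check. As a sketch (all constants are the published estimates quoted above, not measured values):

```python
# Back-of-the-envelope downtime cost, using the cited estimates.
# All numbers are illustrative industry figures, not measurements.

HOURLY_COST = 42_000      # Gartner's conservative estimate, USD per hour
HOURS_PER_WEEK = 1.6      # Dun & Bradstreet: minimum weekly downtime
WEEKS_PER_YEAR = 52

# Typical case: 1.6 h/week compounds to roughly 83 hours per year.
annual_hours = HOURS_PER_WEEK * WEEKS_PER_YEAR
annual_cost_typical = annual_hours * HOURLY_COST

# Worse-than-average case quoted in the text: 175 hours per year.
annual_cost_worse = 175 * HOURLY_COST

print(f"Typical:    {annual_hours:.1f} h/yr -> ${annual_cost_typical:,.0f}")
print(f"Worse case: 175.0 h/yr -> ${annual_cost_worse:,.0f}")
```

The worse-than-average case works out to $7.35 million, consistent with the "more than $7 million" figure in the text.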

The demands and complexity of incident investigation put further strain on IT professionals, whose experience often cannot scale to the size of the environments in their organizations. An incident may be captured, monitored and reported using standardized forms, often through a help-desk or trouble-ticket software system and sometimes within a formal process methodology such as ITIL. But the core activity is still a technical specialist "nosing around" the system, trying to "figure out" what is wrong based on previous experience and personal expertise.[10]

References

  1. "Risk Management Broken in Many Organizations, says Gartner", Government Technology.
  2. Ken Jackson, "The Dawning of the IT Automation Era", IT Business Edge.
  3. Bob Violino, "Reducing IT Complexity", Smart Enterprise.
  4. "Change, Configuration, and Release: What's Really Driving Top Performance?", IT Process Institute.
  5. "Improving Application Quality by Controlling Application Infrastructure", Configuration Management Crossroads.
  6. Cameron Sturdevant, "How to Tame Virtualization Sprawl", eWeek.
  7. "Challenges and Priorities for Fortune 1000 Companies".
  8. "How Much Does Downtime Really Cost?", Information Management.
  9. "How to quantify downtime", NetworkWorld.
  10. "Root Cause Analysis for IT Incidents Investigation", IT Toolbox.