Tuesday, Oct 21 2014

Large-Scale Hadoop Installations are the New Norm

What web-based company has the world's largest Hadoop cluster? Surprisingly, it's not Google, Facebook, or even Twitter: it's Yahoo!, which stores 455 petabytes of data across more than 40,000 servers running over 100,000 CPUs. The company's biggest single Hadoop cluster, at around 4,500 nodes, is roughly four times the size of Facebook's largest cluster.

Hadoop is a hot topic in today's tech world, especially when it comes to Big Data. As more organizations implement Big Data strategies and mine their data for insights, large-scale use of Hadoop is set to become the new standard for practical, results-driven data mining.

What is Hadoop, and why does it matter?

At its most basic, Hadoop is a free, open-source software library that makes useful, cost-effective processing of Big Data possible. The library, developed by the Apache Software Foundation, builds on techniques Google invented, and later published, to index the massive amounts of data its search engine collects and turn them into relevant results for searchers.

Hadoop consists of four modules: Hadoop Common, the Hadoop Distributed File System (HDFS), Hadoop YARN, and Hadoop MapReduce. It is complemented by compatible add-ons, such as high-level query languages and databases, that extend the library's real-world applications.
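
To make the MapReduce module concrete, here is a minimal sketch of the canonical word-count job written against Hadoop's Java MapReduce API. The map phase emits a count of one for every word it sees, and the reduce phase sums those counts per word; the input and output paths are supplied on the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every word in the input split
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Packaged into a jar, the job is submitted with a command along the lines of hadoop jar wordcount.jar WordCount /input /output, where the two paths are hypothetical HDFS directories.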

By providing scale and flexibility for large data projects at a cost that both enterprises and small businesses can afford, Hadoop is an attractive solution with enormous potential.

The appeal of Hadoop

As Yahoo! has come to realize, Hadoop provides a wide range of flexible, scalable capabilities and vastly increased potential for the real application of Big Data. In most large organizations today, data is siloed — stored and worked with in separate systems with little to no cross-functionality. Large-scale Hadoop installations make it possible for organizations to share data quickly, easily and effectively, with strong security measures still in place to prevent data breaches and malware attacks.
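
As a small illustration of that shared store, the sketch below uses the standard HDFS Java client to copy a local file into the cluster and read it back. The NameNode address and file paths are hypothetical placeholders; in practice the address is usually picked up from core-site.xml.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsRoundTrip {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; normally read from core-site.xml
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        FileSystem fs = FileSystem.get(conf);

        // Copy a local file into the shared cluster store
        fs.copyFromLocalFile(new Path("/tmp/events.log"),
                             new Path("/data/shared/events.log"));

        // Any authorized team can now read the same file back
        try (BufferedReader reader = new BufferedReader(
                 new InputStreamReader(fs.open(new Path("/data/shared/events.log"))))) {
          System.out.println(reader.readLine());
        }
        fs.close();
      }
    }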

With an organization's data stored collectively, Hadoop installations can then run YARN to manage the data ecosystem. Hadoop YARN is a framework that provides job scheduling and cluster resource management, enabling the system to spread work efficiently across many machines and deliver increased flexibility. Hadoop also maintains redundancy to guard against data loss and system failure: HDFS replicates each data block across several machines, and YARN reschedules work from failed nodes.
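
To illustrate the resource-management side, the short sketch below uses YARN's client API to ask the ResourceManager which applications it is currently tracking. It assumes a reachable cluster whose yarn-site.xml is on the classpath; this is a minimal sketch, not a full YARN application.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class ListApplications {
      public static void main(String[] args) throws Exception {
        // Reads yarn-site.xml from the classpath to locate the ResourceManager
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new Configuration());
        yarn.start();

        // Ask the ResourceManager for every application it is tracking
        for (ApplicationReport app : yarn.getApplications()) {
          System.out.printf("%s  %s  state=%s  queue=%s%n",
              app.getApplicationId(), app.getName(),
              app.getYarnApplicationState(), app.getQueue());
        }
        yarn.stop();
      }
    }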

With YARN, engineers and developers can immediately begin work on their own slice of a larger deployment, and collaborate with other teams on the same cluster without sacrificing security.

Combining Hadoop with other systems

The Hadoop ecosystem also includes several distinct systems that can be operated independently while remaining part of the larger platform. These include HBase, a non-relational distributed database that runs on top of HDFS; Pig, a high-level platform for analyzing large data sets; and Hive, a data warehouse infrastructure with an SQL-like query language.
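
As a quick illustration, the sketch below stores and retrieves a single row through HBase's Java client, using the API as it stood around the time of writing (newer releases deprecate HTable and Put.add). The table name, column family, and row key are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
      public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath to find ZooKeeper
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "user_profiles");  // hypothetical table

        // Write one cell: row "user42", column family "info", qualifier "email"
        Put put = new Put(Bytes.toBytes("user42"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("email"),
                Bytes.toBytes("user42@example.com"));
        table.put(put);

        // Read the cell back by row key
        Result result = table.get(new Get(Bytes.toBytes("user42")));
        byte[] email = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("email"));
        System.out.println(Bytes.toString(email));

        table.close();
      }
    }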

Hadoop can handle large swaths of an organization's data needs, but depending on the company, other systems may supplement a Hadoop installation, and the library integrates well with popular enterprise systems. For example, Yahoo! uses separate systems to serve email and Flickr photos, but copies data from those systems into Hadoop for analysis.

The rise of Big Data and the need for efficient, cost-effective analytics have paved the way for Hadoop to become standard in organizations of all sizes. To find out whether a Hadoop installation is right for your organization, contact the IT experts at The Armada Group.
