
Hadoop Ecosystem

We have expertise in the open-source Hadoop™ Distributed File System (HDFS), which addresses Big Data challenges quickly and efficiently. HDFS is a robust solution that performs well even on commodity hardware with minimal resources, while ensuring high availability and reliability. This lets you use your existing resources optimally and cost-effectively; a short code sketch follows the list below.

The key areas we focus on when dealing with Big Data are:

  • Distributed and parallel computing
  • Automatic load balancing
  • Scalability and failover
  • Data replication and robustness
  • Network and disk-transfer optimization

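As a minimal sketch of working with HDFS programmatically (the NameNode address, file path, and sample content here are hypothetical), a file can be written with an explicit replication factor through Hadoop's Java FileSystem API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; replace with your cluster's.
            conf.set("fs.default.name", "hdfs://namenode:8020");

            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/data/sample.txt");

            // Replication factor of 3: HDFS keeps three copies of each
            // block, which is what provides the robustness noted above.
            FSDataOutputStream out = fs.create(path, (short) 3);
            out.writeUTF("hello hdfs");
            out.close();

            fs.close();
        }
    }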

Hive, built on top of Hadoop™, is an SQL-based data warehousing system that fulfills the need for schema agility and query-language flexibility. Hive provides an optimized, extensible, and low-cost way of querying structured data stored in HDFS.
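
A brief sketch of issuing such a query from Java over Hive's JDBC driver; the server address and the page_views table are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver; address and table are hypothetical.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://hiveserver:10000/default", "", "");
            Statement stmt = conn.createStatement();

            // SQL-style query that Hive compiles into MapReduce jobs.
            ResultSet rs = stmt.executeQuery(
                    "SELECT category, COUNT(*) FROM page_views GROUP BY category");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
            conn.close();
        }
    }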
Pig, a platform for analyzing Big Data, has been used by Impetus to write automatically optimized, substantially parallelized code for data-analysis tasks.
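
A short sketch of driving Pig from Java through its PigServer API, using a hypothetical log file and schema; Pig compiles the statements into an optimized, parallel MapReduce plan:

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class PigAnalysisExample {
        public static void main(String[] args) throws Exception {
            // Local mode for illustration; ExecType.MAPREDUCE runs on the cluster.
            PigServer pig = new PigServer(ExecType.LOCAL);

            // Hypothetical input file and schema.
            pig.registerQuery(
                "logs = LOAD 'access_log.txt' AS (ip:chararray, url:chararray);");
            pig.registerQuery("byUrl = GROUP logs BY url;");
            pig.registerQuery("hits = FOREACH byUrl GENERATE group, COUNT(logs);");

            // Materialize the per-URL hit counts to an output directory.
            pig.store("hits", "url_hit_counts");
        }
    }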

Extended Components

Flume, Chukwa, and Scribe provide high-performance log aggregation from large numbers of servers, thanks to the scalability, extensibility, and failover they offer without requiring client-side modification.
Apache Mahout offers a rich set of distributed, scalable machine-learning libraries that extract insights from Big Data. Mahout's core algorithms for clustering, classification, and batch-based collaborative filtering are implemented on top of Apache Hadoop™ using the MapReduce paradigm.
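
As a small illustration, here is a sketch using Mahout's Taste collaborative-filtering API in its local, non-distributed form (the ratings.csv file and its user,item,preference format are assumptions); the Hadoop-based batch jobs build on the same data-model and similarity concepts:

    import java.io.File;
    import java.util.List;

    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
    import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;
    import org.apache.mahout.cf.taste.similarity.UserSimilarity;

    public class RecommenderExample {
        public static void main(String[] args) throws Exception {
            // ratings.csv (hypothetical): userID,itemID,preference per line.
            DataModel model = new FileDataModel(new File("ratings.csv"));
            UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
            UserNeighborhood neighborhood =
                    new NearestNUserNeighborhood(10, similarity, model);
            Recommender recommender =
                    new GenericUserBasedRecommender(model, neighborhood, similarity);

            // Top five recommendations for user 42.
            List<RecommendedItem> items = recommender.recommend(42L, 5);
            for (RecommendedItem item : items) {
                System.out.println(item.getItemID() + " : " + item.getValue());
            }
        }
    }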
Apache Sqoop and HiHo help import and export data between traditional relational databases and HDFS.
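
A sketch of invoking a Sqoop import programmatically through its Java entry point; the JDBC URL, credentials, table, and target directory are hypothetical:

    import org.apache.sqoop.Sqoop;

    public class SqoopImportExample {
        public static void main(String[] args) {
            // Connection string, user, table, and paths are hypothetical.
            String[] importArgs = {
                "import",
                "--connect", "jdbc:mysql://dbhost/sales",
                "--username", "etl",
                "--table", "orders",
                "--target-dir", "/data/orders",
                "--num-mappers", "4"  // parallel map tasks for the transfer
            };
            int exitCode = Sqoop.runTool(importArgs);
            System.exit(exitCode);
        }
    }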
Oozie handles workflow coordination for data processing, resolving the dependencies between MapReduce jobs.
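
A sketch of submitting a workflow to a hypothetical Oozie server through the OozieClient Java API; the application path must point to a workflow.xml in HDFS that defines the MapReduce actions and their dependencies:

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;

    public class OozieSubmitExample {
        public static void main(String[] args) throws Exception {
            // Oozie server URL and HDFS application path are hypothetical.
            OozieClient client = new OozieClient("http://oozieserver:11000/oozie");

            Properties conf = client.createConfiguration();
            // The workflow.xml at this path declares the MapReduce actions
            // and the transitions (dependencies) between them.
            conf.setProperty(OozieClient.APP_PATH,
                    "hdfs://namenode:8020/apps/etl-workflow");
            conf.setProperty("jobTracker", "jobtracker:8021");
            conf.setProperty("nameNode", "hdfs://namenode:8020");

            String jobId = client.run(conf);
            System.out.println("Submitted workflow job: " + jobId);
        }
    }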