Tuesday, March 5, 2013

HBase: An HDFS Database

HBase: A Simple Introduction
HBase: Features
  • Strongly consistent reads/writes: HBase is not an "eventually consistent" Data Store. This makes it very suitable for tasks such as high-speed counter aggregation.
  • Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are automatically split and re-distributed as your data grows.
  • Automatic Region Server failover
  • Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system.
  • MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase as both source and sink.
  • Java Client API: HBase supports an easy-to-use Java API for programmatic access.
  • Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends.
  • Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high volume query optimization.
  • Operational Management: HBase provides built-in web pages for operational insight as well as JMX metrics.
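The Bloom filter mentioned above can be illustrated with a toy model. This is a hedged sketch of the general technique, not HBase's actual implementation: a Bloom filter lets a read skip any StoreFile that definitely does not contain the requested row key, at the cost of occasional false positives.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, key):
        # Derive k independent positions from a single hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key):
        # False means "definitely absent": the StoreFile can be skipped.
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter()
bf.add("row-0001")
print(bf.might_contain("row-0001"))  # True (a Bloom filter never gives false negatives)
```

A "no" answer is always reliable, which is what makes the filter useful for skipping StoreFiles during high-volume reads; a "yes" answer only means the file must actually be checked.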
HBase: Data Model
HBase: Architecture
1.      Master
2.     Region Servers
Table       (HBase table)
Region      (Regions for the table)
Store       (Store per Column Family for each Region for the table)
MemStore    (MemStore for each Store for each Region for the table)
StoreFile   (StoreFiles for each Store for each Region for the table)
Block       (Blocks within a StoreFile within a Store for each Region for the table)


        ·  StoreFile (HFile):
StoreFiles are where your data lives; they are stored in HDFS in the HFile format. A StoreFile can be inspected with the bundled HFile tool:
        $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile

         ·  MemStore:
       The MemStore holds in-memory modifications to the Store; modifications are KeyValues. When asked to flush, the current MemStore is moved to a snapshot and a fresh MemStore is created. HBase continues to serve edits out of the new MemStore and the backing snapshot until the flusher reports that the flush succeeded, at which point the snapshot is discarded.
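The snapshot-and-flush cycle above can be sketched as a toy model. The class and method names here are illustrative, not HBase's real classes; the point is that reads stay consistent while a flush is in progress because they consult the active MemStore, then the snapshot, then the flushed StoreFiles.

```python
class Store:
    """Toy model of MemStore snapshot/flush (not HBase's real classes)."""
    def __init__(self):
        self.memstore = {}    # active in-memory edits
        self.snapshot = {}    # edits currently being flushed
        self.storefiles = []  # flushed, immutable "HFiles"

    def put(self, key, value):
        self.memstore[key] = value

    def get(self, key):
        # Reads consult the active MemStore, then the snapshot, then StoreFiles.
        if key in self.memstore:
            return self.memstore[key]
        if key in self.snapshot:
            return self.snapshot[key]
        for sf in reversed(self.storefiles):
            if key in sf:
                return sf[key]
        return None

    def start_flush(self):
        # The current MemStore becomes the snapshot; new edits go to a fresh one.
        self.snapshot, self.memstore = self.memstore, {}

    def finish_flush(self):
        # The snapshot is persisted as a StoreFile, then let go.
        self.storefiles.append(dict(self.snapshot))
        self.snapshot = {}

s = Store()
s.put("r1", "v1")
s.start_flush()
s.put("r2", "v2")           # served from the new MemStore
assert s.get("r1") == "v1"  # still readable from the snapshot mid-flush
s.finish_flush()
assert s.get("r1") == "v1"  # now served from a StoreFile
```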

HBase: Bulk Loading
Bulk Load Architecture
When Would I Use Apache HBase?
What Is The Difference Between HBase and Hadoop/HDFS?

Apache HBase is the Hadoop database, a distributed, scalable, big data store.



HBase is a type of "NoSQL" database. "NoSQL" is a general term meaning that the database isn't an RDBMS that supports SQL as its primary access language. There are many types of NoSQL databases: Berkeley DB is an example of a local NoSQL database, whereas HBase is very much a distributed database. Technically speaking, HBase is really more a "Data Store" than a "Data Base" because it lacks many of the features you find in an RDBMS, such as typed columns, secondary indexes, triggers, and advanced query languages.


(Table, Rowkey, Family, Column, Timestamp) → Value
Think of tags: values can be of any length, with no predefined names, types, or widths, and column names carry information, just like tags do.
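The mapping above can be modeled as a multi-dimensional sorted map in which a read returns the newest version by default. This is a minimal Python sketch of the concept (the class and method names are illustrative, not the HBase API):

```python
class ToyTable:
    """Toy model of HBase's (row, family, qualifier, timestamp) -> value map."""
    def __init__(self):
        self.cells = {}  # (row, family, qualifier) -> {timestamp: value}

    def put(self, row, family, qualifier, value, ts):
        self.cells.setdefault((row, family, qualifier), {})[ts] = value

    def get(self, row, family, qualifier, ts=None):
        versions = self.cells.get((row, family, qualifier), {})
        if not versions:
            return None
        if ts is None:
            ts = max(versions)  # newest version wins by default
        return versions.get(ts)

t = ToyTable()
t.put("row1", "cf", "city", "Pune", ts=100)
t.put("row1", "cf", "city", "Mumbai", ts=200)
print(t.get("row1", "cf", "city"))          # Mumbai (latest timestamp)
print(t.get("row1", "cf", "city", ts=100))  # Pune (an older version)
```

Because every cell is versioned by timestamp, an update never overwrites in place; it simply adds a newer version, and older versions remain readable until compaction removes them.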

HMaster is the implementation of the Master Server. The Master server is responsible for monitoring all RegionServer instances in the cluster, and is the interface for all metadata changes. In a distributed cluster, the Master typically runs on the NameNode. In a multi-Master environment, all Masters compete to run the cluster. If the active Master loses its lease in ZooKeeper (or the Master shuts down), then the remaining Masters jostle to take over the Master role.
HRegionServer is the RegionServer implementation. It is responsible for serving and managing regions. In a distributed cluster, a RegionServer runs on a DataNode. Regions are the basic element of availability and distribution for tables, and are comprised of a Store per Column Family. The hierarchy of objects is as shown above.
     ·  Write Ahead Log (WAL):
Each RegionServer adds updates (Puts, Deletes) to its write-ahead log (WAL) first, and then to the MemStore for the affected Store. This ensures that HBase has durable writes. Without the WAL, there is the possibility of data loss if a RegionServer fails before each MemStore is flushed and new StoreFiles are written. HLog is the HBase WAL implementation, and there is one HLog instance per RegionServer. The WAL lives in HDFS under /hbase/.logs/ with subdirectories per region.
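The WAL-first write path can be sketched with a toy model. The names here are illustrative, not real HBase classes; the point is that because every edit is appended to a durable log before it touches the in-memory MemStore, a crashed server can rebuild its MemStore by replaying the log.

```python
class RegionServer:
    """Toy model of the WAL-then-MemStore write path (not real HBase classes)."""
    def __init__(self, wal=None):
        self.wal = wal if wal is not None else []  # durable, append-only log
        self.memstore = {}
        for op, key, value in self.wal:  # recovery: replay the log on startup
            self._apply(op, key, value)

    def _apply(self, op, key, value):
        if op == "put":
            self.memstore[key] = value
        elif op == "delete":
            self.memstore.pop(key, None)

    def put(self, key, value):
        self.wal.append(("put", key, value))  # 1. append to the WAL first
        self._apply("put", key, value)        # 2. then apply to the MemStore

rs = RegionServer()
rs.put("r1", "v1")
# Simulate a crash: the in-memory MemStore is lost, but the WAL survives in HDFS.
recovered = RegionServer(wal=rs.wal)
assert recovered.memstore["r1"] == "v1"
```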

HBase includes several methods of loading data into tables. The most straightforward method is to either use the TableOutputFormat class from a MapReduce job, or use the normal client APIs; however, these are not always the most efficient methods. The bulk load feature uses a MapReduce job to output table data in HBase's internal data format, and then directly loads the generated StoreFiles into a running cluster. Using bulk load will use less CPU and network resources than simply using the HBase API.

The HBase bulk load process consists of two main steps.
1.      Preparing data via a MapReduce job
The first step of a bulk load is to generate HBase data files (StoreFiles) from a MapReduce job using HFileOutputFormat. This output format writes out data in HBase's internal storage format so that they can be later loaded very efficiently into the cluster.
To function efficiently, HFileOutputFormat must be configured so that each output HFile fits within a single region. To achieve this, jobs whose output will be bulk loaded into HBase use Hadoop's TotalOrderPartitioner class to partition the map output into disjoint ranges of the key space, corresponding to the key ranges of the regions in the table.
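The partitioning idea can be sketched in a few lines. This is a hedged stand-in for what TotalOrderPartitioner does, using hypothetical region split points: a binary search over the sorted split keys maps each row key to the one region whose key range contains it, so the resulting ranges are disjoint.

```python
import bisect

# Hypothetical region split points:
# region 0 is (-inf, "g"), region 1 is ["g", "p"), region 2 is ["p", +inf).
split_points = ["g", "p"]

def partition(row_key, splits=split_points):
    """Map a row key to the index of the region whose key range contains it."""
    return bisect.bisect_right(splits, row_key)

assert partition("apple") == 0
assert partition("mango") == 1
assert partition("zebra") == 2
```

Because each reducer then receives exactly one such range, every HFile it writes is guaranteed to fall inside a single region's key range.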

2.      Completing the data load
After the data has been prepared using HFileOutputFormat, it is loaded into the cluster using completebulkload. This command-line tool iterates through the prepared data files and, for each one, determines the region the file belongs to. It then contacts the appropriate RegionServer, which adopts the HFile, moving it into its storage directory and making the data available to clients.
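Conceptually, the region-matching step works like this hedged sketch (the paths and split points are hypothetical, and the real tool inspects the actual key ranges inside each HFile): look at each prepared file's first row key, find the region whose range contains it, and hand the file to that region.

```python
import bisect

# Hypothetical prepared HFiles, each tagged with its first row key.
hfiles = [
    {"path": "/staging/hfile-00", "first_key": "aaa"},
    {"path": "/staging/hfile-01", "first_key": "hhh"},
    {"path": "/staging/hfile-02", "first_key": "qqq"},
]
# Region 0: (-inf, "g"), region 1: ["g", "p"), region 2: ["p", +inf).
region_splits = ["g", "p"]

def assign_regions(files, splits):
    """For each prepared HFile, find the region that should adopt it."""
    return {f["path"]: bisect.bisect_right(splits, f["first_key"]) for f in files}

assignments = assign_regions(hfiles, region_splits)
print(assignments)
# {'/staging/hfile-00': 0, '/staging/hfile-01': 1, '/staging/hfile-02': 2}
```

Because adoption is a file move rather than a write through the normal client path, no WAL or MemStore work is needed, which is why bulk load uses far less CPU and network than the HBase API.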

Use Apache HBase when you need random, real-time read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows × millions of columns -- atop clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned, column-oriented store modelled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

HDFS is a distributed file system that is well suited for the storage of large files. Its documentation states that it is not, however, a general-purpose file system, and does not provide fast individual record lookups in files. HBase, on the other hand, is built on top of HDFS and provides fast record lookups (and updates) for large tables. This can sometimes be a point of conceptual confusion. HBase internally puts your data in indexed StoreFiles that exist on HDFS for high-speed lookups. See the Data Model section above for more information on how HBase achieves its goals.

