Refcard #159

Essential Apache HBase

Learn HBase: the NoSQL Database for Hadoop and Big Data

HBase Tutorial - Learn HBase quickly with this beginner's introduction to the Hadoop database: a distributed, scalable Big Data store for managing very large tables.

Written By

Otis Gospodnetic
Founder, Sematext
Table of Contents
  • About HBase
  • HBase Configuration
  • Starting and Stopping Apache HBase
  • Using the HBase Shell
  • Learn the HBase Java API
  • HBase Web UI: Master & Slaves
  • HBase Data Model & Schema Design
  • MapReduce with HBase
  • HBase Performance Tuning
  • Managing and Monitoring HBase
  • HBase Troubleshooting & Debugging
  • HBase API for Non-Java Languages
  • Useful HBase Links & Other Sources
Section 1

About HBase

HBase is the Hadoop database. Think of it as a distributed, scalable Big Data store.

Use HBase when you need random, real-time read/write access to your Big Data. The goal of the HBase project is to host very large tables — billions of rows multiplied by millions of columns — on clusters built with commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

Section 2

HBase Configuration

OS & Other Pre-requisites

HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolution should work.

HBase uses many files simultaneously. The default maximum number of allowed open file descriptors (1024 on most *nix systems) is often insufficient. Increase this setting for any user running HBase.

The nproc setting for the user running HBase also often needs to be increased: under load, a low nproc setting can result in an OutOfMemoryError.

Because HBase depends on Hadoop, it bundles a Hadoop jar under its /lib directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is critical that the version of Hadoop on your cluster matches the one bundled with HBase. If the versions do not match, replace the Hadoop jar in the HBase /lib directory with the Hadoop jar from your cluster.

To increase the maximum number of files an HDFS DataNode can serve at one time, add the following to hadoop/conf/hdfs-site.xml:

<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>

hbase-env.sh

You can set HBase environment variables in this file.

Env Variable | Description
HBASE_HEAPSIZE | The maximum amount of heap to use, in MB. The default is 1000. It is essential to give HBase as much memory as you can (avoid swapping!) to achieve good performance.
HBASE_OPTS | Extra Java runtime options. You can also add the following to watch for GC: export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $HBASE_GC_OPTS"

hbase-site.xml

Site-specific customizations go into this file, in the following format:

<configuration>
    <property>
        <name>property_name</name>
        <value>property_value</value>
    </property>
    …
</configuration>

For the list of configurable properties, refer to http://hbase.apache.org/book.html#hbase_default_configurations (or view the raw /conf/hbase-default.xml source file).

These are the most important properties:

Property | Value | Description
hbase.cluster.distributed | true | Set the value to true when running in distributed mode.
hbase.zookeeper.quorum | my.zk.server1,my.zk.server2,… | HBase depends on a running ZooKeeper cluster; configure an external ZK ensemble here. (If not configured, an internal instance of ZK is started.)
hbase.rootdir | hdfs://my.hdfs.server/hbase | The directory shared by region servers, where HBase persists its data. The URL should be 'fully qualified' to include the filesystem scheme.
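Client code can also supply these properties programmatically on the Hadoop Configuration object before opening tables. A minimal sketch, assuming placeholder ZooKeeper host names:

Configuration conf = HBaseConfiguration.create();   // loads hbase-default.xml and hbase-site.xml

// Override or supply properties in code; the ZooKeeper quorum is usually
// all a client needs in order to locate the cluster
conf.set("hbase.zookeeper.quorum", "my.zk.server1,my.zk.server2");
conf.set("hbase.zookeeper.property.clientPort", "2181");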


Section 3

Starting and Stopping Apache HBase

Running Modes

HBase can run in one of the following modes:

Mode | Description
Standalone | HBase does not use HDFS; it uses the local filesystem instead, and it runs all HBase daemons and a local ZooKeeper in the same JVM.
Pseudo-distributed | A distributed mode running on a single host. Use this configuration for testing and prototyping on HBase.
Fully distributed | Daemons are spread across all nodes in the cluster. Use this configuration for production or for evaluating HBase performance.

Hot Tip

To manage all nodes from the master, set up passwordless SSH.

Start/Stop Commands

From the master node:
  • Start: bin/start-hbase.sh
  • Stop: bin/stop-hbase.sh

On every node:
  • Start: bin/hbase master start; bin/hbase regionserver start
  • Stop: bin/hbase master stop; bin/hbase regionserver stop

CDH (on every node):
  • Start: service hadoop-hbase-master start; service hadoop-hbase-regionserver start
  • Stop: service hadoop-hbase-master stop; service hadoop-hbase-regionserver stop
Section 4

Using the HBase Shell

To run the HBase shell:

$ ./bin/hbase shell

For examples of scripting HBase, look for files with the .rb extension in the HBase bin directory. To run one of these scripts, do the following:

$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT

Shell Command Example | Description
help | Show shell help
create 'mytable', {NAME => 'colfam1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}, {NAME => 'colfam2'} | Create a table
list | List all tables
disable 'mytable'; drop 'mytable' | Drop a table
truncate 'mytable' | Truncate (clear) a table: disable -> drop -> create
alter 'mytable', {NAME => 'new_colfam'}, {NAME => 'colfam_to_delete', METHOD => 'delete'} | Alter a table / add a column family (Note: the table must be disabled to make column family modifications)
scan 'mytable' | Scan all table records
scan 'mytable', {LIMIT=>10, STARTROW=>"start_row", STOPROW=>"stop_row"} | Scan 10 records in a table starting with "start_row" and ending with "stop_row" (use double quotes to escape hex-based characters)
get 'mytable', 'row_key' | Get a record
put 'mytable', 'row_key', 'colfam:qual', 'value' | Add/update a record
delete 'mytable', 'row_key', 'colfam:qual' | Delete a record
major_compact 'mytable' | Run minor/major compaction
split 'mytable' | Split region(s)
status 'detailed' | Get HBase cluster status details and statistics
Section 5

Learn the HBase Java API

Schema Creation

An HBase schema can be created or updated using HBaseAdmin in the Java API, as shown below:

Configuration config = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(config);
String table = "myTable";

admin.disableTable(table);

HColumnDescriptor cf1 = ...;
admin.addColumn(table, cf1);      // adding a new ColumnFamily
HColumnDescriptor cf2 = ...;
admin.modifyColumn(table, cf2);   // modifying an existing ColumnFamily

admin.enableTable(table);
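Creating a brand-new table goes through HTableDescriptor in the same API. A minimal sketch, assuming illustrative table and column family names:

Configuration config = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(config);

// Describe the table and its column families before creating it
HTableDescriptor desc = new HTableDescriptor("myTable");
desc.addFamily(new HColumnDescriptor("colfam1"));
desc.addFamily(new HColumnDescriptor("colfam2"));

admin.createTable(desc);   // the new table is enabled and ready for use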

HTable

HTable manages connections to the HBase table.

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "mytable");
table.setAutoFlush(false);
table.setWriteBufferSize(2 * 1024 * 1024); // 2 MB
// … do useful stuff
table.close();

Put

"Put" adds new records into the HBase table.

HTable table = ...    // instantiate HTable
Put put = new Put(Bytes.toBytes("key1"));
put.add(Bytes.toBytes("colfam"), Bytes.toBytes("qual"),
        Bytes.toBytes("my_value"));
put.add(...);
...
table.put(put);

Get

"Get" fetches a single record given its key.

HTable table = ...    // instantiate HTable
Get get = new Get(rowKey);
get.addColumn(Bytes.toBytes("colfam"),
        Bytes.toBytes("qual"));              // to fetch a specific column
get.addFamily(Bytes.toBytes("colfam2"));     // to fetch the whole column family
Result result = table.get(get);
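Values come back inside the Result object as raw bytes. A short sketch of extracting one value, assuming the column names used above and that the value was written as a String:

// Pull the stored bytes for a specific column and convert them back
byte[] raw = result.getValue(Bytes.toBytes("colfam"), Bytes.toBytes("qual"));
String value = Bytes.toString(raw);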

Delete

"Delete" deletes a single record given its key.

HTable table = ... // instantiate HTable
Delete toDelete = new Delete(rowKey);
table.delete(toDelete);

Scan

"Scan" searches through multiple rows iteratively for specified attributes.

HTable table = ...    // instantiate HTable
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"));
scan.setStartRow(Bytes.toBytes("row"));                            // start key is inclusive
scan.setStopRow(Bytes.add(Bytes.toBytes("row"), new byte[] {0}));  // stop key is exclusive
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result result : scanner) {
        // process Result instance
    }
} finally {
    scanner.close();
}
Section 6

HBase Web UI: Master & Slaves

HMaster and RegionServer web interfaces are handy for checking high-level cluster state as well as the details of individual tables, regions, and slaves. See the Managing and Monitoring HBase section for how to access more statistics and metrics exposed by HBase.

Port | Web Interface | Contains
60010 | HMaster web interface (default port) | Cluster status and statistics; table statistics, region information and location; tools to invoke compactions and splits on regions and whole tables; links to each RegionServer's web interface
60030 | RegionServer web interface (default port) | RegionServer statistics, assigned region info, and access to logs
Section 7

HBase Data Model & Schema Design

Data Model

Table
Applications store data into an HBase table. Tables are made of rows and columns. Table cells — the intersection of row and column coordinates — are versioned.

Cell Value
A {row, column, version} tuple precisely specifies a cell in HBase.

Versions
It is possible to have an unbounded number of cells where the row and column are the same but the cell address differs only in its version dimension. A version is specified as a long integer. The HBase version dimension is stored in decreasing order so when reading from a store file, the most recent values are found first.

Row Key
Table row keys are byte arrays. Therefore, almost anything can serve as a row key, from strings to binary representations of longs or even serialized data structures.

Rows are lexicographically sorted with the lowest order appearing first in a table. The empty byte array is used to denote both the start and end of a table's namespace. All table accesses are via the table row key — its primary key.

Columns & Column Families
Columns in HBase are grouped into column families. All column members of a column family have the same prefix. For example, the courses:history and courses:math columns are both members of the courses column family. Physically, all column family members are stored together in the filesystem. Because tuning and storage specifications are done at the column family level, it is recommended that all column family members have the same general access pattern and size characteristics.

Schema Creation & Updating

Tables are declared up front at schema-definition time using the HBase shell or Java API (see earlier sections). Column families are defined at table-creation time. It is possible to alter a table and add new column families, but the table must be disabled at altering time.


When changes are made to either tables or column families (e.g., region size, block size), these changes take effect the next time there is a major compaction and the StoreFiles get re-written.


Row-Key Design

Try to keep row keys short: because they are stored with every cell in an HBase table, shorter row keys noticeably reduce the amount of data needed to store HBase data. This advice also applies to column family names. The common trade-offs when choosing between sequential row keys and randomly distributed row keys are:

Design Solution | Pros | Cons
Sequential row keys (e.g., time-series data with the row key built from a timestamp) | Makes it possible to perform fast range scans by setting start/stop keys on the Scanner | Creates a RegionServer hotspotting problem when writing data (as row keys arrive in sequence, all records end up written to a single region at a time)
Randomly distributed row keys (e.g., UUIDs) | Aims for the fastest write performance by distributing new records over random regions | Does not allow fast range scans against the written data

Some mixed-design approaches allow fast range scans while distributing writes of sequential (by nature) data across the whole cluster. One ready-to-use solution is HBaseWD: https://github.com/sematext/HBaseWD.

Row-key design is essential to gaining maximum performance when using HBase to store your application's data. See the HBase Reference Guide for more: http://hbase.apache.org/book.html#rowkey.design.
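As one illustration of a mixed design, a composite row key can lead with an entity identifier and follow with a reverse timestamp, keeping each entity's rows contiguous for range scans while spreading different entities across the keyspace. A hedged sketch; the entity name and key layout are hypothetical:

// Hypothetical composite key: <entity id><reverse timestamp>
// Rows for one entity stay contiguous (fast per-entity range scans),
// and the newest records for that entity sort first.
long timestamp = ...;   // event time in milliseconds
byte[] rowKey = Bytes.add(
        Bytes.toBytes("sensor-42"),                    // entity id (hypothetical)
        Bytes.toBytes(Long.MAX_VALUE - timestamp));    // reverse timestamp

Put put = new Put(rowKey);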

Column Families

Currently, HBase does not do well with anything above two or three column families per table. With that said, keep the number of column families in your schema low. Try to make do with one column family in your schemata if you can. Only introduce a second and third column family in the case where data access is usually column-scoped; i.e. you usually query no more than a single column family at one time.


You can also set a TTL (in seconds) for a column family. HBase will automatically delete rows once the expiration time is reached.

Versions

The maximum number of row versions that can be stored is configured per column family (the default is 3). This is an important parameter because HBase does not overwrite row values, but rather stores different values per row by time (and qualifier). Setting the number of maximum versions to an exceedingly high level (e.g., hundreds or more) is not a good idea because that will greatly increase StoreFile size.

The minimum number of row versions to keep can also be configured per column family (the default is 0, meaning the feature is disabled). This parameter is used together with TTL and maximum row versions parameters to allow configurations such as "keep the last T minutes worth of data of at least M versions, and at most N versions." This parameter should only be set when TTL is enabled for a column family and must be less than the number of row versions.
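Both version limits, as well as the TTL mentioned above, are configured on the column family descriptor. A minimal sketch using the HColumnDescriptor API shown earlier; the values are illustrative:

HColumnDescriptor cf = new HColumnDescriptor("colfam1");
cf.setMaxVersions(3);        // keep at most 3 versions per cell (the default)
cf.setMinVersions(1);        // keep at least 1 version, even past the TTL
cf.setTimeToLive(2592000);   // TTL in seconds (30 days)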

Data Types

HBase supports a "bytes-in/bytes-out" interface via Put and Result, so anything that can be converted to an array of bytes can be stored as a value. Input can be strings, numbers, complex objects, or even images, as long as they can be rendered as bytes.

One supported data type that deserves special mention is the "counters" type. This type enables atomic increments of numbers.
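Counter cells are incremented atomically on the server side, so the client needs no read-modify-write cycle. A minimal sketch using HTable.incrementColumnValue; the table and column names are illustrative:

HTable table = ...   // instantiate HTable
// Atomically add 1 to the counter stored in the given cell;
// the updated value is returned to the caller
long newCount = table.incrementColumnValue(
        Bytes.toBytes("row_key"),
        Bytes.toBytes("colfam"),
        Bytes.toBytes("pageviews"),
        1L);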

Section 8

MapReduce with HBase

Reading from HBase

Job Configuration

The following is an example of using HBase as a MapReduce source in a read-only manner:

Configuration config = HBaseConfiguration.create();

config.set(                                   // speculative
    "mapred.map.tasks.speculative.execution", // execution will
    "false");                                 // decrease performance
                                              // or damage the data

Job job = new Job(config, "ExampleRead");
job.setJarByClass(MyReadJob.class);   // class that contains mapper

Scan scan = new Scan();
scan.setCaching(500);         // 1 is the default in Scan,
                              // which will be bad for MapReduce jobs
scan.setCacheBlocks(false);   // don't set to true for MR jobs

// set other scan attrs
...

TableMapReduceUtil.initTableMapperJob(
    tableName,        // input HBase table name
    scan,             // Scan instance to control CF and attribute selection
    MyMapper.class,   // mapper
    null,             // mapper output key
    null,             // mapper output value
    job);

job.setOutputFormatClass(NullOutputFormat.class);   // because we aren't
                                                    // emitting anything from the mapper

boolean b = job.waitForCompletion(true);

if (!b) {
    throw new IOException("error with job!");
}

The mapper instance would extend TableMapper, too, like this:

public static class MyMapper extends TableMapper<Text, Text> {

    public void map(ImmutableBytesWritable row, Result value, Context context)
            throws InterruptedException, IOException {
        // process data for the row from the Result instance.
    }
}

Map Tasks Number
When TableInputFormat is used (set by default with TableMapReduceUtil.initTableMapperJob(...)) to read an HBase table for input to a MapReduce job, its splitter will make a map task for each region of the table. Thus, if there are 100 regions in the table, there will be 100 map tasks for the job, regardless of how many column families are selected in the Scan. To implement different behavior (custom splitters), see the getSplits method in TableInputFormatBase (either override it in a custom splitter class or use it as an example), as sketched below.
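A minimal sketch of the override approach; the class name and the post-processing step are placeholders:

public class MyTableInputFormat extends TableInputFormat {

    @Override
    public List<InputSplit> getSplits(JobContext context) throws IOException {
        // Start from the default one-split-per-region behavior...
        List<InputSplit> splits = super.getSplits(context);
        // ...then filter, merge, or subdivide the splits as required
        return splits;
    }
}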

Writing to HBase

Job Configuration
The following is an example of using HBase both as a source and as a sink with MapReduce:

Configuration config = ...;    // configuring reading
Job job = ...;                 // from HBase table
Scan scan = ...;               // is the same as in
TableMapReduceUtil             // the read-only example
    .initTableMapperJob(...);  // above

TableMapReduceUtil.initTableReducerJob(
    targetTable,            // output table
    MyTableReducer.class,   // reducer class
    job);

job.setNumReduceTasks(1);   // at least one, adjust as required

boolean b = job.waitForCompletion(true);

And the reducer instance would extend TableReducer, as shown here:

public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        ...
        Put put = ...;   // data to be written
        context.write(null, put);
        ...
    }
}
Section 9

HBase Performance Tuning

Performance tuning could fill this whole Refcard and more, so this section contains only the most common pointers (in addition to what is mentioned in other sections). Please refer to the HBase documentation (e.g., the Apache HBase Reference Guide) for more details.

OS

  • Give HBase as much RAM as you can (hbase-env.sh).
  • Use a 64-bit platform.
  • Don't allow swapping.

Java

  • Watch for Garbage Collector behavior (avoid long stop-the-world pauses).
  • Use Java versions that are proven to work well with HBase.

HBase Configuration

Setting | Comments
hbase.regionserver.handler.count (hbase-site.xml) | Number of threads kept open to answer incoming requests to user tables. The rule of thumb is to keep this number low (10 by default) when the payload per request approaches the MB range (big puts, scans using a large cache), and high when the payload is small (gets, small puts, ICVs, deletes).
hbase.hregion.max.filesize (hbase-site.xml) | Consider going to larger regions to cut down on the total number of regions on your cluster. Generally, fewer regions to manage makes for a smoother-running cluster. A lower number of regions is preferred, i.e., in the range of 20 to the low hundreds per RegionServer. The default of 256 MB is usually too low.
Compression | Consider enabling ColumnFamily compression (on CF creation). Several options are near-frictionless, and in most cases they boost performance by reducing the size of StoreFiles, thus reducing I/O as well.
Splitting | You may want to turn off automatic splitting, e.g., for easier debugging. To do so, increase hbase.hregion.max.filesize (e.g., to 100 GB).
Major compactions | A common administrative technique is to manage major compactions manually rather than letting HBase do it. By default, HConstants.MAJOR_COMPACTION_PERIOD is one day. Major compactions may kick in when you least desire it, especially on a busy system. To turn off automatic major compactions, set the hbase.hregion.majorcompaction property (hbase-site.xml) to 0.

Schema Design

  • Consider using Bloom Filters.
  • Consider increasing column family blocksize (default is 64KB) when cell values are large to reduce StoreFile indexes.
  • Consider defining in-memory column families.
  • Use column family compression.

Writing to HBase

  • For batch loading, use the bulk-load tool if possible. For more about this technique, see http://hbase.apache.org/book/arch.bulk.load.html.
  • Use pre-splitting of regions when bulk loading into an empty HBase table (http://hbase.apache.org/book/perf.writing.html).
  • Consider disabling WAL or use deferred WAL flushes.
  • Make sure that setAutoFlush is set to false on your HTable instance when performing a lot of Puts (see the sketch after this list).
  • Watch for and avoid the RegionServer hot-spotting problem. The sign is that one RS is sweating while others are resting, i.e. an uneven load distribution while writing (hot-spotting is also mentioned in the Row-Key Design section).
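A minimal sketch of the client-side write buffering enabled by setAutoFlush(false); the record type and column names are placeholders:

HTable table = new HTable(conf, "mytable");
table.setAutoFlush(false);                   // buffer Puts on the client side
table.setWriteBufferSize(8 * 1024 * 1024);   // e.g., an 8 MB write buffer

for (Record r : records) {                   // 'records' is a placeholder for your data
    Put put = new Put(r.getKeyBytes());
    put.add(Bytes.toBytes("colfam"), Bytes.toBytes("qual"), r.getValueBytes());
    table.put(put);                          // queued in the client-side write buffer
}

table.flushCommits();                        // flush any remaining buffered Puts
table.close();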

Reading from HBase

  • Use bigger-than-default (1) scan caching when reading a lot of records, e.g., when using the HBase table as a source for a MapReduce job. But beware that overly aggressive caching may cause timeouts (e.g., an UnknownScannerException) if the processing of records is heavy or slow.
  • Narrow down Scan selection by defining column families and columns you need to fetch.
  • Close ResultScanners in code.
  • Set cacheBlocks to false on a Scan used as the source for a MapReduce job.
Section 10

Managing and Monitoring HBase

Health Check

hbck
Use hbck to check the consistency of your cluster:

$ ./bin/hbase hbck

Metrics
Use metrics exposed by Hadoop and HBase to monitor the state of your cluster. As most processes are Java processes, general JVM monitoring is useful.

You can use JConsole or any other JMX client to access metrics exposed by HBase. Some metrics are also exposed via HMaster and RegionServer web interfaces.

SPM for HBase (http://sematext.com/spm/hbase-performance-monitoring/index.html) monitors all key HBase metrics, renders time-series performance graphs, includes alerts, etc.

The most important metrics to monitor are:

Group | Metric
HBase | Requests count
HBase | Compactions queue
Java | GC metrics
OS | IO Wait
OS | User CPU

Backup Options

Type | Method
Full shutdown | Use the HDFS distcp tool: stop HBase, distcp the HBase directory in HDFS, then start HBase pointing to the copied directory.
Live | Set up replication between two running clusters.
Live | Use the CopyTable utility to copy data from one table to another on the same cluster, or to copy data to a table on another cluster.
Live | Use the Export tool (run as a MapReduce job) to export HBase table contents to HDFS, then use the Import tool to load the data into another table from the dump.

See http://hbase.apache.org/book/ops.backup.html for more about HBase backup methods.

Section 11

HBase Troubleshooting & Debugging

Use the information in the Managing and Monitoring HBase section for preemptive cluster health checks. When bad things happen, refer to the logs (usually starting with the HMaster logs) and/or the collected metrics. This is where access to historical stats is very useful (see SPM for HBase: http://sematext.com/spm/hbase-performance-monitoring/index.html).

Logs

Process | Log Location
NameNode | $HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log
DataNode | $HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log
JobTracker | $HADOOP_HOME/logs/hadoop-<user>-jobtracker-<hostname>.log
TaskTracker | $HADOOP_HOME/logs/hadoop-<user>-tasktracker-<hostname>.log
HMaster | $HBASE_HOME/logs/hbase-<user>-master-<hostname>.log
RegionServer | $HBASE_HOME/logs/hbase-<user>-regionserver-<hostname>.log
ZooKeeper | /var/log/zookeeper/zookeeper.log

To output GC logs (important for debugging RegionServer failures), uncomment (or add) the following line in the hbase-env.sh file:

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

Resources

search-hadoop.com (http://search-hadoop.com/) indexes all mailing lists, wikis, JIRA issues, sources, code, javadocs, etc. Search here first when you have an issue. More than likely someone has already had your problem.

Mailing Lists
Ask a question on the HBase mailing lists. The 'dev' mailing list is aimed at the community of developers actually building HBase, and it is also for features currently under development. The 'user' list is generally used for questions about released versions of HBase. Before going to the mailing list, first make sure your question has not already been answered by searching the mailing list archives.

IRC
#hbase on irc.freenode.net

JIRA
JIRA is also really helpful when looking for Hadoop/HBase-specific issues at https://issues.apache.org/jira/browse/HBASE.

Section 12

HBase API for Non-Java Languages

Languages talking to the JVM
The HBase wiki (see links at the end) contains examples of how to access HBase using the following languages:

  • Jython
  • Groovy
  • Scala

Languages with a custom protocol
HBase can also be accessed using the following protocols:

  • REST
  • Thrift
Section 13

Useful HBase Links & Other Sources

HBase project: http://hbase.apache.org

HBase Wiki: http://wiki.apache.org/hadoop/Hbase

Apache HBase Reference Guide: http://hbase.apache.org/book.html

Search for Hadoop Ecosystem projects (wiki, mailing lists, source code and more): http://search-hadoop.com

Monitoring for HBase: http://sematext.com/spm/hbase-performance-monitoring
