Below you will find pages that use the taxonomy term “Storm”
Apache Storm and ulimits
Running Apache Storm in production requires increasing the default nofile and nproc ulimits.
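As a sketch (the `storm` user name and the exact values are assumptions to tune for your cluster), the limits can be raised for the user running Storm in /etc/security/limits.conf:

```
# /etc/security/limits.conf
# raise open-file (nofile) and process (nproc) limits for the Storm user
storm  soft  nofile  32768
storm  hard  nofile  32768
storm  soft  nproc   65536
storm  hard  nproc   65536
```

After the Storm user logs back in, the new limits can be checked with `ulimit -n` (nofile) and `ulimit -u` (nproc).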
Using HBase within Storm
There is a lot of documentation around Apache Storm and Apache HBase, but not so much about how to use the hbase-client inside Storm. In this post, I’ll outline:
- Information about my dev environment
- Setting up your Storm project to use the HBase client
- Managing connections to HBase in Storm
- Reading one row (Get)
- Reading many rows (Scan)
- Writing one row (Put)
- Writing many rows in a batch of Puts
Please note: this post assumes you are already comfortable with Storm and HBase terminology. If you are just starting out with Storm, take a look at my example project on GitHub: storm-stlhug-demo.
Also, when writing to HBase from Storm, consider storm-hbase; it is a great way to start streaming data into HBase. However, if you need to write to multiple tables or handle more advanced scenarios, you will need to understand how to write your own HBase bolts.
Getting started with Storm: Logging
Logging within Storm uses the Simple Logging Facade for Java (SLF4J).
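As a minimal sketch of the pattern (the class name `WordCountBolt` is illustrative, and in a real topology the class would extend a Storm bolt base class), declaring and using an SLF4J logger looks like:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative class name; any bolt class follows the same pattern.
public class WordCountBolt {
    // SLF4J logger bound to whatever logging backend Storm ships with
    private static final Logger LOG = LoggerFactory.getLogger(WordCountBolt.class);

    public void process(String word) {
        // parameterized messages avoid string concatenation when the level is disabled
        LOG.info("processing word: {}", word);
    }
}
```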
Tick tuples within Storm
Tick tuples are useful when you want to execute some logic within your bolts on a schedule. For example, refreshing a cache every 5 minutes.
Within your bolt, first enable receiving the tick tuples and configure how often you want to receive them:
@Override
public Map<String, Object> getComponentConfiguration() {
    // configure how often a tick tuple will be sent to our bolt
    Config conf = new Config();
    conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 300);
    return conf;
}
Next, create the isTickTuple helper method to determine whether we’ve received a tick tuple:
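As a self-contained sketch of that helper: in a real bolt the method takes a Storm Tuple and compares tuple.getSourceComponent() and tuple.getSourceStreamId() against the ids in Storm’s Constants class; the string literals below are the values those constants hold.

```java
// Sketch of the isTickTuple check. In a real bolt this would take a Tuple
// and read tuple.getSourceComponent() / tuple.getSourceStreamId().
public class TickTupleCheck {
    static final String SYSTEM_COMPONENT_ID = "__system";   // Constants.SYSTEM_COMPONENT_ID
    static final String SYSTEM_TICK_STREAM_ID = "__tick";   // Constants.SYSTEM_TICK_STREAM_ID

    // a tuple is a tick tuple only if it came from the system component
    // on the system tick stream
    static boolean isTickTuple(String sourceComponent, String sourceStreamId) {
        return SYSTEM_COMPONENT_ID.equals(sourceComponent)
                && SYSTEM_TICK_STREAM_ID.equals(sourceStreamId);
    }

    public static void main(String[] args) {
        System.out.println(isTickTuple("__system", "__tick")); // true: a tick tuple
        System.out.println(isTickTuple("mySpout", "default")); // false: a normal tuple
    }
}
```

With this in place, execute() can branch on isTickTuple to run the scheduled logic (e.g. the cache refresh) instead of normal tuple processing.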