My goal was to create a process for importing data into Hive using Sqoop 1.4.6. It needed to be simple (or easily automated) and use a robust file format.
Importing data from a relational database into Hive should be easy. It's not. When you're first getting started there are a lot of snags along the way: configuration issues, missing jar files, formatting problems, schema issues, data type conversion, and … the list goes on. This post shines some light on a way to use command line tools to import data as Avro files and create Hive schemas, plus solutions for some of the problems you will hit along the way.
Spark includes the ability to write multiple different file formats to HDFS. One of those is ORC, a columnar file format featuring great compression and improved query performance through Hive.
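As a rough illustration, here is a minimal sketch of writing a DataFrame out as ORC using Spark's Java API; the input and output paths are hypothetical and it assumes a Spark 2.x-style SparkSession:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OrcWriteExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orc-write-example")
                .getOrCreate();

        // Hypothetical input: read some JSON from HDFS into a DataFrame
        Dataset<Row> df = spark.read().json("hdfs:///data/raw/events");

        // Write it back out as ORC; Hive can then query the result through
        // an external table pointed at the output directory
        df.write().format("orc").save("hdfs:///data/orc/events");

        spark.stop();
    }
}
```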
As we gradually replace the regular Windows command line with PowerShell, it will be useful to set up a PowerShell environment for Java / Maven development.
For SQL Server Reporting Services (SSRS) reports there is the ability to create subscriptions. Subscriptions can be scheduled to send emails, export reports to SharePoint document libraries, or save to Windows file shares.
Depending on your requirements, you may need to grant or remove access to the subscribe action; this can be managed by creating or editing the default SharePoint roles: Read, Contribute, Full Control, etc.
Please note, this post assumes you are already comfortable with Storm and HBase terminology. If you are just starting out with Storm, take a look at my example project on GitHub: storm-stlhug-demo.
Also, an option to consider when writing to HBase from Storm is storm-hbase; it is a great way to start streaming data into HBase. However, if you need to write to multiple tables or get into more advanced scenarios, you will need to understand how to write your own HBase bolts.
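To give a feel for what a hand-rolled bolt involves, here is a minimal sketch, assuming Storm 1.x (org.apache.storm packages) and the HBase 1.x client API; the table name, column family, and tuple fields are hypothetical:

```java
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

// Hypothetical bolt that writes each incoming tuple to an "events" HBase table
public class EventHBaseBolt extends BaseRichBolt {

    private OutputCollector collector;
    private transient Connection connection;
    private transient Table table;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            // Picks up hbase-site.xml from the classpath of the worker
            Configuration hbaseConf = HBaseConfiguration.create();
            connection = ConnectionFactory.createConnection(hbaseConf);
            table = connection.getTable(TableName.valueOf("events"));
        } catch (Exception e) {
            throw new RuntimeException("Could not connect to HBase", e);
        }
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            // Assumes the upstream component emits "id" and "payload" fields
            Put put = new Put(Bytes.toBytes(tuple.getStringByField("id")));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"),
                    Bytes.toBytes(tuple.getStringByField("payload")));
            table.put(put);
            collector.ack(tuple);
        } catch (Exception e) {
            collector.fail(tuple);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt: nothing emitted downstream
    }

    @Override
    public void cleanup() {
        try {
            if (table != null) table.close();
            if (connection != null) connection.close();
        } catch (Exception ignored) {
        }
    }
}
```

Writing to a second table is then just a matter of opening another Table handle in prepare() and adding the corresponding Put in execute().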
I ran into some trouble executing a simple MapReduce program on TEZ. I kept reading about the special “-Dmapreduce.framework.name=yarn-tez” parameter you could pass to your MR job to make it switch frameworks without modifying the configs for your entire cluster, but I couldn’t get the argument to be parsed correctly.
After doing some research, I found that your MapReduce class must extend Configured and implement Tool. With that in place, the generic arguments are parsed for you automatically and applied to the job configuration.
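Here is a minimal driver sketch showing the pattern; the class name and input/output handling are just placeholders, but the Configured / Tool / ToolRunner wiring is the standard Hadoop API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJobDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects any -D options from the command line,
        // e.g. -Dmapreduce.framework.name=yarn-tez
        Job job = Job.getInstance(getConf(), "my job");
        job.setJarByClass(MyJobDriver.class);
        // job.setMapperClass(...), job.setReducerClass(...),
        // job.setOutputKeyClass(...), job.setOutputValueClass(...) go here
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses the generic options (-D, -files, -libjars, ...)
        // and passes only the remaining args to run()
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
    }
}
```

Launched with something like `hadoop jar myjob.jar MyJobDriver -Dmapreduce.framework.name=yarn-tez /input /output`, the framework switch is picked up without touching the cluster-wide configs.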
A few notes from playing with Zeppelin on the Hortonworks HDP 2.3 sandbox.
Logging within Storm uses the Simple Logging Facade for Java (SLF4J).
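For reference, a minimal sketch of how a bolt or spout typically obtains and uses an SLF4J logger; the class and field names are just examples:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyBolt /* extends BaseRichBolt, details omitted */ {

    // SLF4J logger; Storm routes these calls to its configured logging backend
    private static final Logger LOG = LoggerFactory.getLogger(MyBolt.class);

    public void logExample(String tupleValue) {
        // {} placeholders avoid string concatenation when the level is disabled
        LOG.info("processing tuple value: {}", tupleValue);
        LOG.debug("debug-level detail only shown when DEBUG is enabled");
    }
}
```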
A list of SharePoint 2013 LCIDs along with their language and the time format.
Tinkers’ Construct is a mod for Minecraft. The following Tinkers’ Construct materials are what you use to build your tools.

| Name | Durability | Handle Modifier | Full Tool Durability | Mining Speed | Mining Level | Base Attack | Material Trait |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Blue Slime | 1200 | 2.0x | 2400 | 1.5 | 0 (Stone) | 0 Hearts | Slimy |
| Green Slime | 750 | 1.5x | 500 | 1.5 | 0 (Stone) | 0 Hearts | Slimy |
| Paper | 30 | 0. | | | | | |