Hadoop is not an island. Delivering a complete Big Data solution requires a data pipeline that incorporates and orchestrates many diverse technologies. A Hadoop-focused data pipeline not only needs to coordinate the running of multiple Hadoop jobs (MapReduce, Hive, Pig or Cascading), but also to encompass real-time data acquisition and the analysis of reduced data sets extracted into relational/NoSQL databases or dedicated analytical engines.
Using real-time Twitter event processing as an example, in this session we will demonstrate how to build manageable and robust Big Data pipeline solutions with open-source software such as Apache Hadoop, Cascading, Spring Batch, Spring Integration and Redis.
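To give a flavour of the kind of orchestration involved, the sketch below shows how a Hadoop MapReduce stage can be wrapped in a Spring Batch tasklet, so that the batch job's step sequencing, restart semantics and monitoring also cover the Hadoop run. The class names, the hashtag-count job and the HDFS directories are illustrative assumptions, not the session's actual demo code.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

/**
 * Illustrative Spring Batch tasklet that submits a Hadoop MapReduce job and
 * blocks until it completes, so the surrounding batch job controls when the
 * Hadoop stage runs and whether downstream steps should proceed.
 */
public class HashtagCountTasklet implements Tasklet {

    private final String inputDir;   // HDFS directory holding collected tweet text (assumed)
    private final String outputDir;  // HDFS directory for the reduced hashtag counts (assumed)

    public HashtagCountTasklet(String inputDir, String outputDir) {
        this.inputDir = inputDir;
        this.outputDir = outputDir;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext)
            throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "hashtag-count");

        job.setJarByClass(HashtagCountTasklet.class);
        job.setMapperClass(HashtagMapper.class);
        job.setReducerClass(HashtagSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(inputDir));
        FileOutputFormat.setOutputPath(job, new Path(outputDir));

        // Fail the Spring Batch step if the Hadoop job does not finish successfully.
        if (!job.waitForCompletion(true)) {
            throw new IllegalStateException("Hadoop job failed: " + job.getJobID());
        }
        return RepeatStatus.FINISHED;
    }

    /** Emits (hashtag, 1) for every hashtag token found in a line of tweet text. */
    public static class HashtagMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text hashtag = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.startsWith("#")) {
                    hashtag.set(token.toLowerCase());
                    context.write(hashtag, ONE);
                }
            }
        }
    }

    /** Sums the per-hashtag counts emitted by the mapper. */
    public static class HashtagSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```

Such a tasklet could form one step of a larger Spring Batch job, preceded by an ingestion step that drains a Redis buffer of tweets into HDFS and followed by a step that exports the reduced counts to a relational or NoSQL store for analysis.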