Abstract
As companies' use of big data products grows, so does the question of which big data architecture best suits a company's needs. This study presents an approach of running multiple processes that simulate preliminary data processing of a sales-transactions input dataset using Apache Pig, in order to find the best-performing big data environment in terms of the level of decentralization over HDFS. The case study approach can give companies an additional tool for estimating the required investment in hardware or cloud computing resources. We analyze which decentralization level achieves the best processing time, and explore how performance changes with the decentralization level and with the size of the input dataset. The case study yields the following insights: when processing the same data flow over the same input dataset, processing time improves as the decentralization level increases; as the decentralization level increases, the difference between successive performance measurements decreases significantly; processing the same Pig data flow at the same decentralization level over a large input dataset performs better than processing it over a smaller input dataset, in terms of processing time per volume unit; and as the blocks-to-data-nodes ratio becomes higher, processing time becomes longer, and vice versa.
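The sketch below illustrates the kind of preliminary Pig data flow over a sales-transactions dataset that the abstract describes: load, cleanse, group, and aggregate. The schema, file paths, and aggregation steps are assumptions for illustration only; the record does not specify the actual script used in the study.

```pig
-- Hypothetical preliminary processing of a sales-transactions dataset.
-- Field names and paths are assumed, not taken from the paper.
transactions = LOAD '/data/sales_transactions.csv'
    USING PigStorage(',')
    AS (tx_id:chararray, store_id:chararray, product_id:chararray,
        quantity:int, unit_price:double, tx_date:chararray);

-- Basic cleansing: drop rows with non-positive quantities or prices.
valid_tx = FILTER transactions BY quantity > 0 AND unit_price > 0.0;

-- Project a per-line total before grouping.
with_total = FOREACH valid_tx GENERATE
    store_id, quantity, quantity * unit_price AS line_total;

-- Aggregate units and revenue per store as a representative summarization step.
by_store = GROUP with_total BY store_id;
store_revenue = FOREACH by_store GENERATE
    group AS store_id,
    SUM(with_total.quantity) AS total_units,
    SUM(with_total.line_total) AS total_revenue;

STORE store_revenue INTO '/output/store_revenue' USING PigStorage(',');
```

Running a flow of this shape over HDFS clusters with different numbers of data nodes, while keeping the script and input fixed, is the kind of experiment the study uses to compare processing times across decentralization levels.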
Original language | English |
---|---|
Pages (from-to) | 429-440 |
Number of pages | 12 |
Journal | International Journal of Software Engineering and its Applications |
Volume | 10 |
Issue number | 11 |
DOIs | |
State | Published - 2016 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2016 SERSC.
Keywords
- Big Data
- HDFS
- Hadoop
- Performance
- Pig
ASJC Scopus subject areas
- Software