Linux spark-bench 4.4.0-93-generic #116-Ubuntu SMP Fri Aug 11 16:31:47 UTC 2017 aarch64 aarch64 aarch64 GNU/Linux
patching args=
Parsing conf: /root/HiBench/conf/hadoop.conf
Parsing conf: /root/HiBench/conf/hibench.conf
Parsing conf: /root/HiBench/conf/spark.conf
Parsing conf: /root/HiBench/conf/workloads/micro/terasort.conf
probe sleep jar: /root/bench/t/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.5-tests.jar
start HadoopPrepareTerasort bench
hdfs rm -r: /root/bench/t/hadoop/bin/hadoop --config /root/bench/t/hadoop/etc/hadoop fs -rm -r -skipTrash hdfs://spark-bench:9000/HiBench/Terasort/Input
Deleted hdfs://spark-bench:9000/HiBench/Terasort/Input
Submit MapReduce Job: /root/bench/t/hadoop/bin/hadoop --config /root/bench/t/hadoop/etc/hadoop jar /root/bench/t/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar teragen -D mapreduce.job.maps=8 -D mapreduce.job.reduces=8 32000 hdfs://spark-bench:9000/HiBench/Terasort/Input
17/12/21 02:09:59 INFO client.RMProxy: Connecting to ResourceManager at spark-bench/172.17.0.2:8032
17/12/21 02:10:00 INFO terasort.TeraSort: Generating 32000 using 8
17/12/21 02:10:00 INFO mapreduce.JobSubmitter: number of splits:8
17/12/21 02:10:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1513072568065_0024
17/12/21 02:10:01 INFO impl.YarnClientImpl: Submitted application application_1513072568065_0024
17/12/21 02:10:01 INFO mapreduce.Job: The url to track the job: http://spark-bench:8088/proxy/application_1513072568065_0024/
17/12/21 02:10:01 INFO mapreduce.Job: Running job: job_1513072568065_0024
17/12/21 02:10:09 INFO mapreduce.Job: Job job_1513072568065_0024 running in uber mode : false
17/12/21 02:10:09 INFO mapreduce.Job:  map 0% reduce 0%
17/12/21 02:10:16 INFO mapreduce.Job:  map 13% reduce 0%
17/12/21 02:10:17 INFO mapreduce.Job:  map 25% reduce 0%
17/12/21 02:10:20 INFO mapreduce.Job:  map 38% reduce 0%
17/12/21 02:10:21 INFO mapreduce.Job:  map 50% reduce 0%
17/12/21 02:10:22 INFO mapreduce.Job:  map 63% reduce 0%
17/12/21 02:10:23 INFO mapreduce.Job:  map 88% reduce 0%
17/12/21 02:10:24 INFO mapreduce.Job:  map 100% reduce 0%
17/12/21 02:10:24 INFO mapreduce.Job: Job job_1513072568065_0024 completed successfully
17/12/21 02:10:24 INFO mapreduce.Job: Counters: 32
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=971136
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=662
        HDFS: Number of bytes written=3200000
        HDFS: Number of read operations=32
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=16
    Job Counters
        Killed map tasks=1
        Launched map tasks=8
        Other local map tasks=8
        Total time spent by all maps in occupied slots (ms)=49870
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=49870
        Total vcore-milliseconds taken by all map tasks=49870
        Total megabyte-milliseconds taken by all map tasks=51066880
    Map-Reduce Framework
        Map input records=32000
        Map output records=32000
        Input split bytes=662
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=1378
        CPU time spent (ms)=9990
        Physical memory (bytes) snapshot=1489547264
        Virtual memory (bytes) snapshot=14745296896
        Total committed heap usage (bytes)=1202192384
    org.apache.hadoop.examples.terasort.TeraGen$Counters
        CHECKSUM=68613941816777
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=3200000
finish HadoopPrepareTerasort bench
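The prepare step above can be reproduced outside the HiBench driver with the same commands it submits. A minimal sketch, assuming the Hadoop 2.7.5 installation under /root/bench/t/hadoop and the HDFS namenode at spark-bench:9000 shown in the log (the 100-byte record size is TeraGen's default, consistent with the 3200000 HDFS bytes written for 32000 rows):

    # Reproduce the HadoopPrepareTerasort step by hand (paths taken from the log above)
    HADOOP=/root/bench/t/hadoop/bin/hadoop
    CONF=/root/bench/t/hadoop/etc/hadoop
    EXAMPLES=/root/bench/t/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar
    INPUT=hdfs://spark-bench:9000/HiBench/Terasort/Input

    # Remove any previous input, then generate 32000 rows with 8 map tasks
    $HADOOP --config $CONF fs -rm -r -skipTrash $INPUT
    $HADOOP --config $CONF jar $EXAMPLES teragen \
        -D mapreduce.job.maps=8 -D mapreduce.job.reduces=8 \
        32000 $INPUT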
patching args=
Parsing conf: /root/HiBench/conf/hadoop.conf
Parsing conf: /root/HiBench/conf/hibench.conf
Parsing conf: /root/HiBench/conf/spark.conf
Parsing conf: /root/HiBench/conf/workloads/micro/terasort.conf
probe sleep jar: /root/bench/t/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.5-tests.jar
start HadoopTerasort bench
hdfs rm -r: /root/bench/t/hadoop/bin/hadoop --config /root/bench/t/hadoop/etc/hadoop fs -rm -r -skipTrash hdfs://spark-bench:9000/HiBench/Terasort/Output
Deleted hdfs://spark-bench:9000/HiBench/Terasort/Output
hdfs du -s: /root/bench/t/hadoop/bin/hadoop --config /root/bench/t/hadoop/etc/hadoop fs -du -s hdfs://spark-bench:9000/HiBench/Terasort/Input
Submit MapReduce Job: /root/bench/t/hadoop/bin/hadoop --config /root/bench/t/hadoop/etc/hadoop jar /root/bench/t/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar terasort -D mapreduce.job.reduces=8 hdfs://spark-bench:9000/HiBench/Terasort/Input hdfs://spark-bench:9000/HiBench/Terasort/Output
17/12/21 02:10:34 INFO terasort.TeraSort: starting
17/12/21 02:10:35 INFO input.FileInputFormat: Total input paths to process : 8
Spent 201ms computing base-splits.
Spent 5ms computing TeraScheduler splits.
Computing input splits took 208ms
Sampling 8 splits of 8
Making 8 from 32000 sampled records
Computing parititions took 542ms
Spent 754ms computing partitions.
17/12/21 02:10:36 INFO client.RMProxy: Connecting to ResourceManager at spark-bench/172.17.0.2:8032
17/12/21 02:10:36 INFO mapreduce.JobSubmitter: number of splits:8
17/12/21 02:10:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1513072568065_0025
17/12/21 02:10:37 INFO impl.YarnClientImpl: Submitted application application_1513072568065_0025
17/12/21 02:10:37 INFO mapreduce.Job: The url to track the job: http://spark-bench:8088/proxy/application_1513072568065_0025/
17/12/21 02:10:37 INFO mapreduce.Job: Running job: job_1513072568065_0025
17/12/21 02:10:45 INFO mapreduce.Job: Job job_1513072568065_0025 running in uber mode : false
17/12/21 02:10:45 INFO mapreduce.Job:  map 0% reduce 0%
17/12/21 02:10:55 INFO mapreduce.Job:  map 75% reduce 0%
17/12/21 02:11:00 INFO mapreduce.Job:  map 100% reduce 0%
17/12/21 02:11:04 INFO mapreduce.Job:  map 100% reduce 13%
17/12/21 02:11:05 INFO mapreduce.Job:  map 100% reduce 25%
17/12/21 02:11:08 INFO mapreduce.Job:  map 100% reduce 38%
17/12/21 02:11:09 INFO mapreduce.Job:  map 100% reduce 63%
17/12/21 02:11:10 INFO mapreduce.Job:  map 100% reduce 75%
17/12/21 02:11:12 INFO mapreduce.Job:  map 100% reduce 88%
17/12/21 02:11:13 INFO mapreduce.Job:  map 100% reduce 100%
17/12/21 02:11:13 INFO mapreduce.Job: Job job_1513072568065_0025 completed successfully
17/12/21 02:11:13 INFO mapreduce.Job: Counters: 50
    File System Counters
        FILE: Number of bytes read=3328664
        FILE: Number of bytes written=8622584
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=3200992
        HDFS: Number of bytes written=3200000
        HDFS: Number of read operations=48
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=16
    Job Counters
        Killed map tasks=1
        Launched map tasks=8
        Launched reduce tasks=8
        Data-local map tasks=8
        Total time spent by all maps in occupied slots (ms)=55452
        Total time spent by all reduces in occupied slots (ms)=57024
        Total time spent by all map tasks (ms)=55452
        Total time spent by all reduce tasks (ms)=57024
        Total vcore-milliseconds taken by all map tasks=55452
        Total vcore-milliseconds taken by all reduce tasks=57024
        Total megabyte-milliseconds taken by all map tasks=56782848
        Total megabyte-milliseconds taken by all reduce tasks=58392576
    Map-Reduce Framework
        Map input records=32000
        Map output records=32000
        Map output bytes=3264000
        Map output materialized bytes=3328384
        Input split bytes=992
        Combine input records=0
        Combine output records=0
        Reduce input groups=32000
        Reduce shuffle bytes=3328384
        Reduce input records=32000
        Reduce output records=32000
        Spilled Records=64000
        Shuffled Maps =64
        Failed Shuffles=0
        Merged Map outputs=64
        GC time elapsed (ms)=3037
        CPU time spent (ms)=25380
        Physical memory (bytes) snapshot=3812827136
        Virtual memory (bytes) snapshot=29539627008
        Total committed heap usage (bytes)=2788163584
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=3200000
    File Output Format Counters
        Bytes Written=3200000
17/12/21 02:11:13 INFO terasort.TeraSort: done
finish HadoopTerasort bench
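The sort step follows the same pattern as the prepare step. A minimal sketch of reproducing the HadoopTerasort run by hand, again using only the paths, namenode address, and job parameters shown in the log above:

    # Reproduce the HadoopTerasort step by hand (paths taken from the log above)
    HADOOP=/root/bench/t/hadoop/bin/hadoop
    CONF=/root/bench/t/hadoop/etc/hadoop
    EXAMPLES=/root/bench/t/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar
    INPUT=hdfs://spark-bench:9000/HiBench/Terasort/Input
    OUTPUT=hdfs://spark-bench:9000/HiBench/Terasort/Output

    # Clear any previous output and report the input size, as the HiBench driver does
    $HADOOP --config $CONF fs -rm -r -skipTrash $OUTPUT
    $HADOOP --config $CONF fs -du -s $INPUT

    # Sort the generated data with 8 reduce tasks
    $HADOOP --config $CONF jar $EXAMPLES terasort \
        -D mapreduce.job.reduces=8 \
        $INPUT $OUTPUT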