We just set a new record for the 10GB TeraSort on a 5-node PicoCluster! We cut over an hour off the benchmark time, bringing the total to under 3 hours! Pretty amazing!
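For context, here is roughly how a 10GB run like this is kicked off with the stock examples jar that ships with Hadoop 1.2.1. This is a sketch, not the exact invocation we used: the HDFS paths are hypothetical, and the jar sits at the top of a default Hadoop 1.2.1 install.

```
# Generate the input: TeraGen takes a row count, and each row is 100 bytes,
# so 100,000,000 rows = 10 GB (matching the counters in the report below).
hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar teragen \
    100000000 /user/hadoop/terasort-input

# Run the sort itself; this is the job timed in the report below.
hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar terasort \
    /user/hadoop/terasort-input /user/hadoop/terasort-output
```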
Hadoop job_201412181311_0002 on master
User: hadoop
Job Name: TeraSort
Job File: hdfs://pi0:54310/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201412181311_0002/job.xml
Submit Host: pi0
Submit Host Address: 10.1.10.120
Job-ACLs: All users are allowed
Job Setup: Successful
Status: Succeeded
Started at: Thu Dec 18 14:54:20 MST 2014
Finished at: Thu Dec 18 17:47:16 MST 2014
Finished in: 2hrs, 52mins, 56sec
Job Cleanup: Successful
Kind | % Complete | Num Tasks | Pending | Running | Complete | Killed | Failed/Killed Task Attempts
---|---|---|---|---|---|---|---
map | 100.00% | 80 | 0 | 0 | 80 | 0 | 0 / 0
reduce | 100.00% | 80 | 0 | 0 | 80 | 0 | 0 / 0
Counter Group | Counter | Map | Reduce | Total
---|---|---|---|---
Map-Reduce Framework | Spilled Records | 0 | 0 | 300,000,000
Map-Reduce Framework | Map output materialized bytes | 0 | 0 | 10,200,038,400
Map-Reduce Framework | Reduce input records | 0 | 0 | 100,000,000
Map-Reduce Framework | Virtual memory (bytes) snapshot | 0 | 0 | 46,356,074,496
Map-Reduce Framework | Map input records | 0 | 0 | 100,000,000
Map-Reduce Framework | SPLIT_RAW_BYTES | 8,800 | 0 | 8,800
Map-Reduce Framework | Map output bytes | 0 | 0 | 10,000,000,000
Map-Reduce Framework | Reduce shuffle bytes | 0 | 0 | 10,200,038,400
Map-Reduce Framework | Physical memory (bytes) snapshot | 0 | 0 | 32,931,528,704
Map-Reduce Framework | Map input bytes | 0 | 0 | 10,000,000,000
Map-Reduce Framework | Reduce input groups | 0 | 0 | 100,000,000
Map-Reduce Framework | Combine output records | 0 | 0 | 0
Map-Reduce Framework | Reduce output records | 0 | 0 | 100,000,000
Map-Reduce Framework | Map output records | 0 | 0 | 100,000,000
Map-Reduce Framework | Combine input records | 0 | 0 | 0
Map-Reduce Framework | CPU time spent (ms) | 0 | 0 | 27,827,080
Map-Reduce Framework | Total committed heap usage (bytes) | 0 | 0 | 32,344,113,152
File Input Format Counters | Bytes Read | 0 | 0 | 10,000,144,320
FileSystemCounters | HDFS_BYTES_READ | 10,000,153,120 | 0 | 10,000,153,120
FileSystemCounters | FILE_BYTES_WRITTEN | 20,404,679,750 | 10,204,290,230 | 30,608,969,980
FileSystemCounters | FILE_BYTES_READ | 10,265,248,834 | 10,200,000,960 | 20,465,249,794
FileSystemCounters | HDFS_BYTES_WRITTEN | 0 | 10,000,000,000 | 10,000,000,000
File Output Format Counters | Bytes Written | 0 | 0 | 10,000,000,000
Job Counters | Launched map tasks | 0 | 0 | 80
Job Counters | Launched reduce tasks | 0 | 0 | 80
Job Counters | SLOTS_MILLIS_REDUCES | 0 | 0 | 28,079,434
Job Counters | Total time spent by all reduces waiting after reserving slots (ms) | 0 | 0 | 0
Job Counters | SLOTS_MILLIS_MAPS | 0 | 0 | 22,051,330
Job Counters | Total time spent by all maps waiting after reserving slots (ms) | 0 | 0 | 0
Job Counters | Rack-local map tasks | 0 | 0 | 30
Job Counters | Data-local map tasks | 0 | 0 | 50
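The counters hang together: TeraSort rows are 100 bytes each, so 100,000,000 map output records × 100 bytes = 10,000,000,000 map output bytes, which is exactly what lands in HDFS as HDFS_BYTES_WRITTEN. The 300,000,000 spilled records (3× the map output) also suggest each record hit local disk more than once during the sort and merge, which is unsurprising given the modest per-node memory. A quick way to double-check the output size on disk, using the hypothetical output path from the sketch above:

```
# Total size of the sorted output in bytes (Hadoop 1.x syntax).
hadoop fs -dus /user/hadoop/terasort-output
# Expect roughly 10,000,000,000 bytes, matching HDFS_BYTES_WRITTEN above.
```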
For the record, this run was on Apache Hadoop release 1.2.1.