Hadoop job_201411131351_0001 on master
User: hadoop
Job Name: TeraSort
Job File: hdfs://pi0:54310/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201411131351_0001/job.xml
Submit Host: pi0
Submit Host Address: 10.1.10.120
Job-ACLs: All users are allowed
Job Setup: Successful
Status: Succeeded
Started at: Thu Nov 13 14:23:35 MST 2014
Finished at: Thu Nov 13 18:35:40 MST 2014
Finished in: 4hrs, 12mins, 5sec
Job Cleanup: Successful
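For reference, a TeraSort run of this size is normally launched from the hadoop-examples-1.2.1.jar that ships with this release, e.g. `hadoop jar hadoop-examples-1.2.1.jar teragen 100000000 <input>` followed by `hadoop jar hadoop-examples-1.2.1.jar terasort <input> <output>`. The sketch below shows an equivalent programmatic launch through ToolRunner; the class name `RunTeraSort` and the HDFS paths are illustrative placeholders, not values taken from this job.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.examples.terasort.TeraGen;
import org.apache.hadoop.examples.terasort.TeraSort;
import org.apache.hadoop.util.ToolRunner;

public class RunTeraSort {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Generate 100,000,000 rows of 100 bytes each (~10 GB), matching the
        // "Map input records" and "Map input bytes" counters reported below.
        // The HDFS paths are placeholders, not the paths used in this run.
        int rc = ToolRunner.run(conf, new TeraGen(),
                new String[] {"100000000", "/user/hadoop/terasort-input"});
        if (rc != 0) {
            System.exit(rc);
        }

        // Sort the generated data; this corresponds to the TeraSort step
        // whose counters are shown on this page.
        rc = ToolRunner.run(conf, new TeraSort(),
                new String[] {"/user/hadoop/terasort-input", "/user/hadoop/terasort-output"});
        System.exit(rc);
    }
}
```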
Kind | % Complete | Num Tasks | Pending | Running | Complete | Killed | Failed/Killed Task Attempts
---|---|---|---|---|---|---|---
map | 100.00% | 152 | 0 | 0 | 152 | 0 | 0 / 0
reduce | 100.00% | 152 | 0 | 0 | 152 | 0 | 0 / 0
Counter Group | Counter | Map | Reduce | Total
---|---|---|---|---
File Input Format Counters | Bytes Read | 0 | 0 | 10,000,298,372
Job Counters | SLOTS_MILLIS_MAPS | 0 | 0 | 24,993,499
Job Counters | Launched reduce tasks | 0 | 0 | 152
Job Counters | Total time spent by all reduces waiting after reserving slots (ms) | 0 | 0 | 0
Job Counters | Rack-local map tasks | 0 | 0 | 144
Job Counters | Total time spent by all maps waiting after reserving slots (ms) | 0 | 0 | 0
Job Counters | Launched map tasks | 0 | 0 | 152
Job Counters | Data-local map tasks | 0 | 0 | 8
Job Counters | SLOTS_MILLIS_REDUCES | 0 | 0 | 34,824,665
File Output Format Counters | Bytes Written | 0 | 0 | 10,000,000,000
FileSystemCounters | FILE_BYTES_READ | 10,341,496,856 | 10,200,000,912 | 20,541,497,768
FileSystemCounters | HDFS_BYTES_READ | 10,000,315,092 | 0 | 10,000,315,092
FileSystemCounters | FILE_BYTES_WRITTEN | 20,409,243,506 | 10,208,123,719 | 30,617,367,225
FileSystemCounters | HDFS_BYTES_WRITTEN | 0 | 10,000,000,000 | 10,000,000,000
Map-Reduce Framework | Map output materialized bytes | 0 | 0 | 10,200,138,624
Map-Reduce Framework | Map input records | 0 | 0 | 100,000,000
Map-Reduce Framework | Reduce shuffle bytes | 0 | 0 | 10,200,138,624
Map-Reduce Framework | Spilled Records | 0 | 0 | 300,000,000
Map-Reduce Framework | Map output bytes | 0 | 0 | 10,000,000,000
Map-Reduce Framework | Total committed heap usage (bytes) | 0 | 0 | 57,912,754,176
Map-Reduce Framework | CPU time spent (ms) | 0 | 0 | 40,328,090
Map-Reduce Framework | Map input bytes | 0 | 0 | 10,000,000,000
Map-Reduce Framework | SPLIT_RAW_BYTES | 16,720 | 0 | 16,720
Map-Reduce Framework | Combine input records | 0 | 0 | 0
Map-Reduce Framework | Reduce input records | 0 | 0 | 100,000,000
Map-Reduce Framework | Reduce input groups | 0 | 0 | 100,000,000
Map-Reduce Framework | Combine output records | 0 | 0 | 0
Map-Reduce Framework | Physical memory (bytes) snapshot | 0 | 0 | 52,945,952,768
Map-Reduce Framework | Reduce output records | 0 | 0 | 100,000,000
Map-Reduce Framework | Virtual memory (bytes) snapshot | 0 | 0 | 123,024,928,768
Map-Reduce Framework | Map output records | 0 | 0 | 100,000,000
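A quick consistency check on these totals: 100,000,000 map input records at TeraGen's fixed 100 bytes per row works out to the 10,000,000,000 bytes reported for map input, map output, and HDFS bytes written, and the 300,000,000 spilled records are three times the map output records, suggesting each record was written to local disk roughly three times during the sort and shuffle phases.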
(Map and Reduce task completion graphs not shown.)
This is Apache Hadoop release 1.2.1