I'm running a job on YARN and keep hitting this situation; could you help me? Thank you.
2015-05-30 09:09:49,351 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Ramping down all scheduled reduces:0
2015-05-30 09:09:49,351 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Going to preempt 1 due to lack of space for maps
2015-05-30 09:09:49,352 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:500, vCores:0>
2015-05-30 09:09:49,352 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2015-05-30 09:09:50,355 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Ramping down all scheduled reduces:0
2015-05-30 09:09:50,355 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Going to preempt 1 due to lack of space for maps
2015-05-30 09:09:50,355 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:500, vCores:0>
2015-05-30 09:09:50,355 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2015-05-30 09:09:51,357 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Ramping down all scheduled reduces:0
2015-05-30 09:09:51,357 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Going to preempt 1 due to lack of space for maps
2015-05-30 09:09:51,357 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:500, vCores:0>
2015-05-30 09:09:51,357 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met.
It looks like your job stays in the RUNNING state but the map tasks never actually execute. The ResourceManager probably isn't allocating enough resources — note the headroom of only <memory:500, vCores:0> in your log.
By the way, do you not understand Chinese, or do you only speak English? Are you a foreigner?
If so:
Here we are on cnblogs — can you speak Chinese?
Try reallocating Hadoop's memory resources and see if that helps.
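For reference, the YARN and MapReduce memory settings usually involved are the ones below. This is a minimal sketch only — the property names are real Hadoop configuration keys, but the values are example assumptions that you would need to adjust to your nodes' actual RAM:

yarn-site.xml (per NodeManager):

```xml
<configuration>
  <!-- Total physical memory YARN may hand out on this node (example value). -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Smallest container the scheduler will allocate (example value). -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <!-- Virtual-to-physical memory ratio allowed per container; raise it
       (or relax the vmem check) if containers are killed for vmem usage. -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
</configuration>
```

mapred-site.xml (per task):

```xml
<configuration>
  <!-- Container sizes requested for map and reduce tasks (example values);
       they must fit within the scheduler's min/max allocation range. -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
</configuration>
```

With headroom stuck at <memory:500, vCores:0>, the cluster has less free memory than one map container needs, which matches the "Going to preempt 1 due to lack of space for maps" messages in your log.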
Guys, you were right — the default virtual memory setting was too high.
The problem is solved.
@xiaojiongen: You should get into the good habit of closing a thread by posting the solution - -
The memory was exhausted.