
[Help] Hive insert into a partitioned table fails with "File not found: File does not exist: reduce.xml"

Bounty: 50 beans [Question closed] Closed on 2015-05-21 08:45

HDFS starts normally; the NN, DN, and SN processes are all running.
Starting Hive shows only a single RunJar process, but queries, creating local tables, and selecting from tables all work fine.

The error occurs when inserting from the local table tb3 into the partitioned table tb4_p:

insert overwrite table tb4_p partition ( pid='p01', pname='pHive01' ) select id, name from tb3;
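For context, the two tables were presumably defined along these lines. This is a hypothetical sketch: only the column names (id, name) and partition columns (pid, pname) come from the query above; the types and delimiter are assumptions.

```sql
-- Hypothetical source table: a plain, non-partitioned Hive table.
CREATE TABLE tb3 (
    id   INT,
    name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Hypothetical target table: partitioned by pid and pname,
-- matching the static partition spec in the INSERT above.
CREATE TABLE tb4_p (
    id   INT,
    name STRING
)
PARTITIONED BY (pid STRING, pname STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```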


Log output:

15/05/16 16:32:44 [Thread-11]: DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 14ms
15/05/16 16:32:44 [Thread-11]: DEBUG sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:50010,DS-9dce30df-cbfc-4f8b-bee8-9300a613b9af,DISK]
15/05/16 16:32:44 [DataStreamer for file /tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/map.xml block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DataStreamer block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020 sending packet packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 3062
15/05/16 16:32:45 [ResponseProcessor for block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DFSClient seqno: 0 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0
15/05/16 16:32:45 [DataStreamer for file /tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/map.xml block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DataStreamer block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020 sending packet packet seqno: 1 offsetInBlock: 3062 lastPacketInBlock: true lastByteOffsetInBlock: 3062
15/05/16 16:32:45 [ResponseProcessor for block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DFSClient seqno: 1 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0
15/05/16 16:32:45 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #29
15/05/16 16:32:45 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #29
15/05/16 16:32:45 [main]: DEBUG ipc.ProtobufRpcEngine: Call: complete took 11ms
15/05/16 16:32:45 [main]: INFO log.PerfLogger: 
15/05/16 16:32:45 [main]: INFO Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
15/05/16 16:32:45 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #30
15/05/16 16:32:45 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #30
15/05/16 16:32:45 [main]: DEBUG ipc.ProtobufRpcEngine: Call: setReplication took 17ms
15/05/16 16:32:45 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.LocalClientProtocolProvider
15/05/16 16:32:45 [main]: DEBUG mapreduce.Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
15/05/16 16:32:45 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
15/05/16 16:32:45 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
15/05/16 16:32:45 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
15/05/16 16:32:45 [main]: INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/05/16 16:32:45 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
15/05/16 16:32:45 [main]: DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
15/05/16 16:32:45 [main]: DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
15/05/16 16:32:45 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = 
15/05/16 16:32:46 [main]: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
15/05/16 16:32:46 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG exec.Utilities: Use session specified class loader
15/05/16 16:32:46 [main]: DEBUG fs.FSStatsPublisher: Initing FSStatsPublisher with : hdfs://localhost:9000/home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/-ext-10001
15/05/16 16:32:46 [main]: DEBUG hdfs.DFSClient: /home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/-ext-10001: masked=rwxr-xr-x
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #31
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #31
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 10ms
15/05/16 16:32:46 [main]: INFO fs.FSStatsPublisher: created : hdfs://localhost:9000/home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/-ext-10001
15/05/16 16:32:46 [main]: DEBUG hdfs.DFSClient: /home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/_tmp.-ext-10002: masked=rwxr-xr-x
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #32
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #32
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 13ms
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.LocalClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
15/05/16 16:32:46 [main]: INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
15/05/16 16:32:46 [main]: DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
15/05/16 16:32:46 [main]: DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
15/05/16 16:32:46 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = 
15/05/16 16:32:46 [main]: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
15/05/16 16:32:46 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:162)
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
15/05/16 16:32:46 [main]: INFO exec.Utilities: PLAN PATH = hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/map.xml
15/05/16 16:32:46 [main]: DEBUG exec.Utilities: Found plan in cache for name: map.xml
15/05/16 16:32:46 [main]: INFO exec.Utilities: PLAN PATH = hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [main]: INFO exec.Utilities: ***************non-local mode***************
15/05/16 16:32:46 [main]: INFO exec.Utilities: local path = hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [main]: INFO exec.Utilities: Open file to read in plan: hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #33
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #33
15/05/16 16:32:46 [main]: INFO exec.Utilities: File not found: File does not exist: /tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1803)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1710)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:586)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

15/05/16 16:32:46 [main]: INFO exec.Utilities: No plan file found: hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [main]: DEBUG mapred.ResourceMgrDelegate: getStagingAreaDir: dir=/tmp/hadoop-yarn/staging/root/.staging
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #34
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #34
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 12ms
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #35
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #35
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 8ms
15/05/16 16:32:46 [main]: DEBUG ipc.Client: The ping interval is 60000 ms.
15/05/16 16:32:46 [main]: DEBUG ipc.Client: Connecting to localhost/127.0.0.1:8032
15/05/16 16:32:47 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:48 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:49 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:50 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:51 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:52 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:53 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

...and it keeps printing these retries indefinitely.


/tmp/hive-1.1.0/scratchdir/ is a temporary directory in HDFS.
Checking that directory in HDFS, it contains only the map.xml file; reduce.xml really is missing.

I referred to http://bbs.csdn.net/topics/390911781, but my environment is very simple, with no HBase or MySQL involved, so I couldn't find the cause there.

Any help would be appreciated!

威格灵 | Beginner Level 1 | Beans: 183
Asked on: 2015-05-16 16:51
All answers (3)

Check the permissions.

【戈多】 | Beans: 282 (Novice Level 2) | 2015-05-18 10:37

Thanks!

The field delimiter specified when creating the table did not match the field delimiter used in the data being loaded.

威格灵 | Beans: 183 (Beginner Level 1) | 2015-05-21 08:44

The field delimiter specified when creating the table did not match the field delimiter used in the data being loaded.
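To illustrate the mismatch described above (a hypothetical sketch, not the asker's actual DDL): Hive's LOAD DATA only moves the file into place without validating it, so a delimiter mismatch is silently accepted at load time and only shows up later, for example when an INSERT ... SELECT reads the mangled rows.

```sql
-- Table declared with tab-separated fields...
CREATE TABLE tb3 (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- ...but the source file is actually comma-separated, e.g. lines like "1,Tom".
-- LOAD DATA succeeds anyway, since it does not parse the file:
LOAD DATA LOCAL INPATH '/tmp/data.csv' INTO TABLE tb3;

-- At read time, Hive splits on '\t', finds none, and puts the whole
-- line into the first column; "1,Tom" fails the INT cast, so both
-- id and name come back NULL. The fix is to make the DDL match the data:
--   ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
```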

威格灵 | Beans: 183 (Beginner Level 1) | 2015-05-21 08:44

Morning! I've run into the same problem, except in my case the error shows up when Hive is scheduled through Oozie.

What does "the field delimiter specified when creating the table did not match the field delimiter in the data" mean?

taotaoa | Beans: 202 (Novice Level 2) | 2016-01-04 11:03