Reading a file from HDFS on the cluster fails with Error: java.io.IOException: Filesystem closed at org.apache.hadoop...



While reading a file on HDFS from a MapReduce job, the task fails with the following error:

22/06/07 10:11:18 INFO mapreduce.Job:  map 0% reduce 0%
22/06/07 10:11:22 INFO mapreduce.Job: Task Id : attempt_1654600519273_0003_m_000000_0, Status : FAILED
Error: java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:817)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:860)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:926)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.fillBuffer(UncompressedSplitLineReader.java:62)
        at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
        at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.readLine(UncompressedSplitLineReader.java:94)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

This happens because every node uses the same Configuration object, and when a node accesses the file system it obtains a FileSystem instance based on that Configuration (the code in my custom Mapper):
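The original snippet is not included in this extract; below is a minimal sketch of what such Mapper code typically looks like. The side-input path, class names, and the explicit fs.close() call are assumptions for illustration, not the author's actual code.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        // FileSystem.get() returns a cached instance keyed by scheme/authority/user.
        FileSystem fs = FileSystem.get(conf);
        Path sidePath = new Path("/data/side-input.txt");   // hypothetical path
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(sidePath), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                context.write(new Text(line), value);
            }
        }
        // Closing fs here closes the shared, cached instance -- the same one the
        // framework's LineRecordReader uses to read the input split, so the next
        // nextKeyValue() call fails with "Filesystem closed".
        fs.close();
    }
}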

By default, the instance created this way is stored in a cache (because of the FileSystem.get source code, sketched below), so every node that reads the file ends up using the same instance. If one node finishes reading the file and then closes the stream, the next node that tries to read gets this error.

And if we do not add a setting to disable the cache, the default is false, i.e. caching stays enabled:
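Roughly, the caching logic lives in org.apache.hadoop.fs.FileSystem.get. The following is an abridged paraphrase of the Hadoop 2.x source (exact lines vary by version; createFileSystem and CACHE are internal members of FileSystem):

// Abridged paraphrase of org.apache.hadoop.fs.FileSystem.get(URI, Configuration)
public static FileSystem get(URI uri, Configuration conf) throws IOException {
    String scheme = uri.getScheme();
    // ... scheme/authority handling omitted ...
    String disableCacheName = String.format("fs.%s.impl.disable.cache", scheme);
    if (conf.getBoolean(disableCacheName, false)) {
        // Cache disabled: always create a fresh, independent instance.
        return createFileSystem(uri, conf);
    }
    // Cache enabled (the default, because getBoolean falls back to false):
    // every caller asking for the same scheme/authority/user gets the same instance.
    return CACHE.get(uri, conf);
}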

So we need to add one setting to the Configuration object to disable the FileSystem cache when reading HDFS files:

configuration.setBoolean("fs.hdfs.impl.disable.cache", true);
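For instance, the property can be set on the job's Configuration in the driver before submission, so both the framework and the Mapper pick it up. The class below and its input/output handling are a hypothetical sketch, not the author's original driver:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Disable the HDFS FileSystem cache so FileSystem.get(conf) returns a new
        // instance each time; closing one no longer breaks the record reader.
        configuration.setBoolean("fs.hdfs.impl.disable.cache", true);

        Job job = Job.getInstance(configuration, "read-hdfs-file"); // hypothetical job name
        job.setJarByClass(MyDriver.class);
        job.setMapperClass(MyMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that the property name is per scheme: fs.hdfs.impl.disable.cache only affects hdfs:// URIs; other schemes have their own fs.<scheme>.impl.disable.cache keys.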

With that in place, the error goes away.