Presto throws an exception when querying a Hudi table: xxx.parquet is not a valid parquet file

The exception:

When querying a Hudi table that had been synced into the Hive metastore, Presto failed with: failed: hdfs://mycluster/tmp/hudi_trips_cow/driver=driver-213/name=nihao/a358b5e4-3085-48af-9104-b509bd4318d7-0_0-149-378_20220725103103905.parquet is not a Parquet file (too small length: 0). Following the path in the error message, I checked the data file on HDFS and found it was 0 bytes long. The Hudi community has a similar issue on record, but nobody has figured out how to reproduce it, so it remains unresolved. Testing showed that deleting such a file does not lose any data, so we wanted to remove all files of this kind. However, this table has many partitions, almost every partition contains a file like this, deleting them by hand is tedious, and new ones may appear later. We therefore decided to filter these files out automatically at query time.
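Before patching anything, it helps to confirm how widespread these files are. Below is a minimal sketch (not part of the fix itself) that uses the Hadoop FileSystem API to recursively scan the table's partitions and print every zero-length .parquet file; the table path is taken from the error message above and is the only assumption:

// FindEmptyParquetFiles.java: list all 0-byte parquet files under a table path.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class FindEmptyParquetFiles {
  public static void main(String[] args) throws IOException {
    Path tablePath = new Path("hdfs://mycluster/tmp/hudi_trips_cow");
    FileSystem fs = tablePath.getFileSystem(new Configuration());
    // true = recursive: walks every partition directory under the table path
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(tablePath, true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (status.getPath().getName().endsWith(".parquet") && status.getLen() == 0) {
        System.out.println(status.getPath());
      }
    }
  }
}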

The fix:

First, look at the stack trace (from the worker's log):

2022-07-25T10:33:30.896+0800   ERROR   SplitRunner-10-72       io.prestosql.execution.executor.TaskExecutor   Error processing Split 20220725_023324_00004_at4rj.1.0-0 {hosts=[], database=default, table=hudi_trips_cow, partitionName=driver=driver-213/name=nihao} (start = 2.1379767769959E7, wall = 767 ms, cpu = 0 ms, wait = 0 ms, calls = 1)
java.lang.RuntimeException: hdfs://mycluster/tmp/hudi_trips_cow/driver=driver-213/name=nihao/a358b5e4-3085-48af-9104-b509bd4318d7-0_0-149-378_20220725103103905.parquet is not a Parquet file (too small length: 0)
      at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:521)
      at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:513)
      at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:507)
      at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:456)
      at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:441)
      at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
      at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
      at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
      at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:93)
      at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:224)
      at io.prestosql.plugin.hive.HiveUtil.createRecordReader(HiveUtil.java:268)
      at io.prestosql.plugin.hive.GenericHiveRecordCursorProvider.lambda$createRecordCursor$0(GenericHiveRecordCursorProvider.java:72)
      at io.prestosql.plugin.hive.authentication.UserGroupInformationUtils.lambda$executeActionInDoAs$0(UserGroupInformationUtils.java:29)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:360)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1855)
      at io.prestosql.plugin.hive.authentication.UserGroupInformationUtils.executeActionInDoAs(UserGroupInformationUtils.java:27)
      at io.prestosql.plugin.hive.authentication.DirectHdfsAuthentication.doAs(DirectHdfsAuthentication.java:37)
      at io.prestosql.plugin.hive.HdfsEnvironment.doAs(HdfsEnvironment.java:99)
      at io.prestosql.plugin.hive.GenericHiveRecordCursorProvider.createRecordCursor(GenericHiveRecordCursorProvider.java:71)
      at io.prestosql.plugin.hive.HivePageSourceProvider.createHivePageSource(HivePageSourceProvider.java:488)
      at io.prestosql.plugin.hive.HivePageSourceProvider.createPageSourceInternal(HivePageSourceProvider.java:250)
      at io.prestosql.plugin.hive.HivePageSourceProvider.createPageSource(HivePageSourceProvider.java:145)
      at io.prestosql.plugin.hive.HivePageSourceProvider.createPageSource(HivePageSourceProvider.java:123)
      at io.prestosql.spi.connector.classloader.ClassLoaderSafeConnectorPageSourceProvider.createPageSource(ClassLoaderSafeConnectorPageSourceProvider.java:55)
      at io.prestosql.split.PageSourceManager.createPageSource(PageSourceManager.java:61)
      at io.prestosql.operator.TableScanOperator.getOutput(TableScanOperator.java:710)
      at io.prestosql.operator.Driver.processInternal(Driver.java:423)
      at io.prestosql.operator.Driver.lambda$processFor$9(Driver.java:315)
      at io.prestosql.operator.Driver.tryWithLock(Driver.java:785)
      at io.prestosql.operator.Driver.processFor(Driver.java:308)
      at io.prestosql.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1261)
      at io.prestosql.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
      at io.prestosql.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)
      at io.prestosql.$gen.Presto_1_5_0____20220724_220410_1.run(Unknown Source)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)

The stack trace shows where the exception enters the code:

// org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase
protected ParquetInputSplit getSplit(
    final org.apache.hadoop.mapred.InputSplit oldSplit,
    final JobConf conf
) throws IOException {
  ...
  final ParquetMetadata parquetMetadata = ParquetFileReader.readFooter(jobConf, finalPath);

Several calls deeper, the exception is ultimately thrown here. readFooter first compares the file length against the smallest possible Parquet file: the 4-byte "PAR1" magic at the head, plus a 4-byte footer-length field and the magic again at the tail, i.e. 12 bytes in total:

// org.apache.parquet.hadoop.ParquetFileReader
if (fileLen < (long) (ParquetFileWriter.MAGIC.length + FOOTER_LENGTH_SIZE + ParquetFileWriter.MAGIC.length)) {
    throw new RuntimeException(filePath + " is not a Parquet file (too small length: " + fileLen + ")");
}

The plan: check fileSize before reading each Parquet file, and skip the file entirely when the size cannot possibly be valid. To keep the blast radius of the change as small as possible, I first tried to patch org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader (HoodieParquetInputFormat.java:224), but debugging showed that this does not work, and a few other candidate locations failed as well. For the record, here is the location that finally worked: in org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase#getSplit, check fileSize first, and if fileSize < ParquetFileWriter.MAGIC.length + FOOTER_LENGTH_SIZE + ParquetFileWriter.MAGIC.length, return null instead of building a split.
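A minimal sketch of that patch, under these assumptions: finalPath and jobConf are the variables already in scope in getSplit (as in the excerpt above), LOG is the class's logger, and the 4-byte footer-length constant is inlined because ParquetFileReader.FOOTER_LENGTH_SIZE is private:

// Inside ParquetRecordReaderBase#getSplit, before readFooter(...) is reached.
final FileSystem fs = finalPath.getFileSystem(jobConf);
final long fileLen = fs.getFileStatus(finalPath).getLen();
// Smallest valid Parquet file: 4-byte "PAR1" head magic + 4-byte footer
// length + 4-byte tail magic = 12 bytes.
final int footerLengthSize = 4; // ParquetFileReader.FOOTER_LENGTH_SIZE is not visible here
if (fileLen < ParquetFileWriter.MAGIC.length + footerLengthSize + ParquetFileWriter.MAGIC.length) {
  LOG.warn("Skipping invalid parquet file (length " + fileLen + "): " + finalPath);
  return null; // a null split makes the wrapper skip this file (see the summary)
}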

Summary

1. It is still unclear why Hudi produces 0-byte Parquet data files, or what triggers this bug.

2. Why does checking fileSize fail everywhere else? After several attempts, it turned out that each split corresponds to a reader wrapped in multiple layers, and the wrapping readers must not be null, or an exception is thrown. Patching at the location above keeps the wrapping reader non-null while leaving its inner realReader null, which makes the whole split be skipped.
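The effect can be modeled with a small runnable toy (class and method names below are illustrative, not Hive's actual code): the outer reader that Presto holds is never null, but when the patched getSplit returns null, the inner realReader stays null, eof is set, and next() simply reports end-of-input, so the split yields zero rows instead of an exception:

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class WrappedReader {
  private final Iterator<String> realReader; // stands in for the real Parquet reader
  private boolean eof;

  WrappedReader(List<String> split) { // a null split models a skipped 0-byte file
    if (split != null) {
      realReader = split.iterator();
    } else {
      realReader = null; // the patched getSplit returned null
      eof = true;        // nothing to read from this split
    }
  }

  boolean next(StringBuilder row) {
    if (eof || !realReader.hasNext()) {
      return false; // end of input: no rows, but also no exception
    }
    row.setLength(0);
    row.append(realReader.next());
    return true;
  }

  public static void main(String[] args) {
    StringBuilder row = new StringBuilder();
    WrappedReader normal = new WrappedReader(Arrays.asList("row1", "row2"));
    while (normal.next(row)) {
      System.out.println(row); // prints row1, row2
    }
    WrappedReader skipped = new WrappedReader(null);
    System.out.println("rows from skipped split: " + skipped.next(row)); // false
  }
}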