Troubleshooting Problems Encountered While Installing Ranger Through Ambari
Hit a blank error while installing Ranger? This guide tackles the problems one by one — removing redundant comments, tuning system parameters, and adjusting the MySQL configuration — so that Ranger installs and starts cleanly.
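To see why removing redundant comments helps: each `content` property in the cluster configuration payload embeds an entire file verbatim, including the long Apache license header, which inflates the request body. A minimal sketch of the idea, assuming the `content` value has already been extracted as a plain string (`strip_comments` is a hypothetical helper, not part of Ambari):

```python
def strip_comments(content: str) -> str:
    """Drop '#' comment lines (e.g. the Apache license header) from a
    properties-file body embedded in a desired_config payload."""
    kept = [line for line in content.splitlines()
            if not line.lstrip().startswith("#")]
    return "\n".join(kept)

# Abbreviated excerpt of the admin-log4j 'content' value from the request below.
original = (
    "#\n"
    "# Licensed to the Apache Software Foundation (ASF) under one\n"
    "# or more contributor license agreements.\n"
    "#\n"
    "log4j.rootLogger = warn,xa_log_appender\n"
    "log4j.appender.xa_log_appender=org.apache.log4j.DailyRollingFileAppender\n"
)

slimmed = strip_comments(original)
print(f"{len(original)} -> {len(slimmed)} bytes")
```

Applied to every embedded config file in the payload, this kind of trimming shrinks the PUT request considerably.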
Problem
When installing Ranger through Ambari, the preparation stage fails and the UI reports a blank error with no diagnostic text.
Analysis
First, capture the failing HTTP request. Copying it from the browser's developer tools as cURL (cmd) yields the following — the `^` characters are Windows cmd escape sequences, and the body is a PUT of the cluster's desired_config with every config file embedded inline:
curl "http://10.144.187.203:8080/api/v1/clusters/xjyb" ^
-X "PUT" ^
-H "Accept: text/plain, */*; q=0.01" ^
-H "Accept-Language: zh-CN,zh;q=0.9" ^
-H "Cache-Control: no-cache" ^
-H "Connection: keep-alive" ^
-H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" ^
-H "Cookie: AMBARISESSIONID=node0h0czekncu2cb1pjkgu8vbz7yh1.node0" ^
-H "Origin: http://10.144.187.203:8080" ^
-H "Pragma: no-cache" ^
-H "Referer: http://10.144.187.203:8080/" ^
-H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" ^
-H "X-Requested-By: X-Requested-By" ^
-H "X-Requested-With: XMLHttpRequest" ^
--data-raw "^[^{^\^"Clusters^\^":^{^\^"desired_config^\^":^[^{^\^"type^\^":^\^"admin-log4j^\^",^\^"properties^\^":^{^\^"content^\^":^\^"^\^\n^#^\^\n^# Licensed to the Apache Software Foundation (ASF) under one^\^\n^# or more contributor license agreements. See the NOTICE file^\^\n^# distributed with this work for additional information^\^\n^# regarding copyright ownership. The ASF licenses this file^\^\n^# to you under the Apache License, Version 2.0 (the^\^\n^# ^\^\^\^"License^\^\^\^"); you may not use this file except in compliance^\^\n^# with the License. You may obtain a copy of the License at^\^\n^#^\^\n^# http://www.apache.org/licenses/LICENSE-2.0^\^\n^#^\^\n^# Unless required by applicable law or agreed to in writing, software^\^\n^# distributed under the License is distributed on an ^\^\^\^"AS IS^\^\^\^" BASIS,^\^\n^# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.^\^\n^# See the License for the specific language governing permissions and^\^\n^# limitations under the License.^\^\n^#^\^\n^\^\n^\^\nlog4j.rootLogger = warn,xa_log_appender^\^\n^\^\n^\^\n^# xa_logger^\^\nlog4j.appender.xa_log_appender=org.apache.log4j.DailyRollingFileAppender^\^\nlog4j.appender.xa_log_appender.file=^$^{logdir^}/xa_portal.log^\^\nlog4j.appender.xa_log_appender.datePattern='.'yyyy-MM-dd^\^\nlog4j.appender.xa_log_appender.append=true^\^\nlog4j.appender.xa_log_appender.layout=org.apache.log4j.PatternLayout^\^\nlog4j.appender.xa_log_appender.layout.ConversionPattern=^%^d^{ISO8601^} ^[^%^t^] ^%-5p ^%^C^{6^} (^%^F:^%^L) - ^%^m^%^n^\^\nlog4j.appender.xa_log_appender.MaxFileSize=^{^{ranger_xa_log_maxfilesize^}^}MB^\^\n^\^\n^# xa_log_appender : category and 
additivity^\^\nlog4j.category.org.springframework=warn,xa_log_appender^\^\nlog4j.additivity.org.springframework=false^\^\n^\^\nlog4j.category.org.apache.ranger=info,xa_log_appender^\^\nlog4j.additivity.org.apache.ranger=false^\^\n^\^\nlog4j.category.xa=info,xa_log_appender^\^\nlog4j.additivity.xa=false^\^\n^\^\n^# perf_logger^\^\nlog4j.appender.perf_appender=org.apache.log4j.DailyRollingFileAppender^\^\nlog4j.appender.perf_appender.file=^$^{logdir^}/ranger_admin_perf.log^\^\nlog4j.appender.perf_appender.datePattern='.'yyyy-MM-dd^\^\nlog4j.appender.perf_appender.append=true^\^\nlog4j.appender.perf_appender.layout=org.apache.log4j.PatternLayout^\^\nlog4j.appender.perf_appender.layout.ConversionPattern=^%^d^{ISO8601^} ^[^%^t^] ^%^m^%^n^\^\n^\^\n^\^\n^# sql_appender^\^\nlog4j.appender.sql_appender=org.apache.log4j.DailyRollingFileAppender^\^\nlog4j.appender.sql_appender.file=^$^{logdir^}/xa_portal_sql.log^\^\nlog4j.appender.sql_appender.datePattern='.'yyyy-MM-dd^\^\nlog4j.appender.sql_appender.append=true^\^\nlog4j.appender.sql_appender.layout=org.apache.log4j.PatternLayout^\^\nlog4j.appender.sql_appender.layout.ConversionPattern=^%^d^{ISO8601^} ^[^%^t^] ^%-5p ^%^C^{6^} (^%^F:^%^L) - ^%^m^%^n^\^\n^\^\n^# sql_appender : category and 
additivity^\^\nlog4j.category.org.hibernate.SQL=warn,sql_appender^\^\nlog4j.additivity.org.hibernate.SQL=false^\^\n^\^\nlog4j.category.jdbc.sqlonly=fatal,sql_appender^\^\nlog4j.additivity.jdbc.sqlonly=false^\^\n^\^\nlog4j.category.jdbc.sqltiming=warn,sql_appender^\^\nlog4j.additivity.jdbc.sqltiming=false^\^\n^\^\nlog4j.category.jdbc.audit=fatal,sql_appender^\^\nlog4j.additivity.jdbc.audit=false^\^\n^\^\nlog4j.category.jdbc.resultset=fatal,sql_appender^\^\nlog4j.additivity.jdbc.resultset=false^\^\n^\^\nlog4j.category.jdbc.connection=fatal,sql_appender^\^\nlog4j.additivity.jdbc.connection=false^\^",^\^"ranger_xa_log_maxbackupindex^\^":^\^"20^\^",^\^"ranger_xa_log_maxfilesize^\^":^\^"256^\^"^},^\^"service_config_version_note^\^":^\^"Ranger^初^始^化^配^置^\^"^},^{^\^"type^\^":^\^"admin-properties^\^",^\^"properties^\^":^{^\^"DB_FLAVOR^\^":^\^"MYSQL^\^",^\^"PATCH_RETRY_INTERVAL^\^":^\^"120^\^",^\^"SQL_CONNECTOR_JAR^\^":^\^"^{^{driver_curl_target^}^}^\^",^\^"db_host^\^":^\^"10.144.187.203^\^",^\^"db_name^\^":^\^"ranger^\^",^\^"db_password^\^":^\^"Xjyb2024^\^",^\^"db_root_password^\^":^\^"^\^",^\^"db_root_user^\^":^\^"root^\^",^\^"db_user^\^":^\^"ranger^\^",^\^"policymgr_external_url^\^":^\^"http://hadoop-007:6080^\^"^},^\^"service_config_version_note^\^":^\^"Ranger^初^始^化^配^置^\^"^},^{^\^"type^\^":^\^"ranger-admin-site^\^",^\^"properties^\^":^{^\^"ranger.admin.kerberos.cookie.domain^\^":^\^"^{^{ranger_host^}^}^\^",^\^"ranger.admin.kerberos.cookie.path^\^":^\^"/^\^",^\^"ranger.admin.kerberos.keytab^\^":^\^"^\^",^\^"ranger.admin.kerberos.principal^\^":^\^"^\^",^\^"ranger.admin.kerberos.token.valid.seconds^\^":^\^"30^\^",^\^"ranger.audit.elasticsearch.index^\^":^\^"ranger_audit_elasticsearch^\^",^\^"ranger.audit.elasticsearch.password^\^":^\^"elasticsearch^\^",^\^"ranger.audit.elasticsearch.port^\^":^\^"9200^\^",^\^"ranger.audit.elasticsearch.protocol^\^":^\^"http^\^",^\^"ranger.audit.elasticsearch.urls^\^":^\^"NONE^\^",^\^"ranger.audit.elasticsearch.user^\^":^\^"elasticsearch^\^",
^\^"ranger.audit.solr.bootstrap.enabled^\^":^\^"false^\^",^\^"ranger.audit.solr.password^\^":^\^"NONE^\^",^\^"ranger.audit.solr.urls^\^":^\^"^\^",^\^"ranger.audit.solr.username^\^":^\^"ranger_solr^\^",^\^"ranger.audit.solr.zookeepers^\^":^\^"NONE^\^",^\^"ranger.audit.source.type^\^":^\^"solr^\^",^\^"ranger.authentication.allow.trustedproxy^\^":^\^"false^\^",^\^"ranger.authentication.method^\^":^\^"UNIX^\^",^\^"ranger.credential.provider.path^\^":^\^"/etc/ranger/admin/rangeradmin.jceks^\^",^\^"ranger.externalurl^\^":^\^"^{^{ranger_external_url^}^}^\^",^\^"ranger.https.attrib.keystore.file^\^":^\^"/etc/security/serverKeys/ranger-admin-keystore.jks^\^",^\^"ranger.is.solr.kerberised^\^":^\^"^{^{ranger_is_solr_kerberised^}^}^\^",^\^"ranger.jpa.jdbc.credential.alias^\^":^\^"rangeradmin^\^",^\^"ranger.jpa.jdbc.dialect^\^":^\^"^{^{jdbc_dialect^}^}^\^",^\^"ranger.jpa.jdbc.driver^\^":^\^"com.mysql.jdbc.Driver^\^",^\^"ranger.jpa.jdbc.password^\^":^\^"_^\^",^\^"ranger.jpa.jdbc.url^\^":^\^"jdbc:mysql://10.144.187.203:3306/ranger?useSSL=false^\^",^\^"ranger.jpa.jdbc.user^\^":^\^"^{^{ranger_db_user^}^}^\^",^\^"ranger.kms.service.user.hdfs^\^":^\^"ocdp^\^",^\^"ranger.kms.service.user.hive^\^":^\^"ocdp^\^",^\^"ranger.ldap.ad.base.dn^\^":^\^"dc=example,dc=com^\^",^\^"ranger.ldap.ad.bind.dn^\^":^\^"^{^{ranger_ug_ldap_bind_dn^}^}^\^",^\^"ranger.ldap.ad.bind.password^\^":^\^"^\^",^\^"ranger.ldap.ad.binddn.credential.alias^\^":^\^"ranger.ldap.ad.bind.password^\^",^\^"ranger.ldap.ad.domain^\^":^\^"^\^",^\^"ranger.ldap.ad.referral^\^":^\^"ignore^\^",^\^"ranger.ldap.ad.url^\^":^\^"^{^{ranger_ug_ldap_url^}^}^\^",^\^"ranger.ldap.ad.user.searchfilter^\^":^\^"(sAMAccountName=^{0^})^\^",^\^"ranger.ldap.base.dn^\^":^\^"dc=example,dc=com^\^",^\^"ranger.ldap.bind.dn^\^":^\^"^{^{ranger_ug_ldap_bind_dn^}^}^\^",^\^"ranger.ldap.bind.password^\^":^\^"^\^",^\^"ranger.ldap.binddn.credential.alias^\^":^\^"ranger.ldap.bind.password^\^",^\^"ranger.ldap.group.roleattribute^\^":^\^"cn^\^",^\^"ranger.ldap.group
.searchbase^\^":^\^"^{^{ranger_ug_ldap_group_searchbase^}^}^\^",^\^"ranger.ldap.group.searchfilter^\^":^\^"^{^{ranger_ug_ldap_group_searchfilter^}^}^\^",^\^"ranger.ldap.referral^\^":^\^"ignore^\^",^\^"ranger.ldap.starttls^\^":^\^"false^\^",^\^"ranger.ldap.url^\^":^\^"^{^{ranger_ug_ldap_url^}^}^\^",^\^"ranger.ldap.user.dnpattern^\^":^\^"uid=^{0^},ou=users,dc=xasecure,dc=net^\^",^\^"ranger.ldap.user.searchfilter^\^":^\^"(uid=^{0^})^\^",^\^"ranger.logs.base.dir^\^":^\^"/var/log/ranger/admin^\^",^\^"ranger.lookup.kerberos.keytab^\^":^\^"^\^",^\^"ranger.lookup.kerberos.principal^\^":^\^"^\^",^\^"ranger.plugins.atlas.serviceuser^\^":^\^"atlas^\^",^\^"ranger.plugins.hbase.serviceuser^\^":^\^"hbase^\^",^\^"ranger.plugins.hdfs.serviceuser^\^":^\^"ocdp^\^",^\^"ranger.plugins.hive.serviceuser^\^":^\^"ocdp^\^",^\^"ranger.plugins.kafka.serviceuser^\^":^\^"kafka^\^",^\^"ranger.plugins.kms.serviceuser^\^":^\^"kms^\^",^\^"ranger.plugins.knox.serviceuser^\^":^\^"knox^\^",^\^"ranger.plugins.storm.serviceuser^\^":^\^"storm^\^",^\^"ranger.plugins.yarn.serviceuser^\^":^\^"ocdp^\^",^\^"ranger.service.host^\^":^\^"^{^{ranger_host^}^}^\^",^\^"ranger.service.http.enabled^\^":^\^"true^\^",^\^"ranger.service.http.port^\^":^\^"6080^\^",^\^"ranger.service.https.attrib.clientAuth^\^":^\^"want^\^",^\^"ranger.service.https.attrib.keystore.credential.alias^\^":^\^"keyStoreCredentialAlias^\^",^\^"ranger.service.https.attrib.keystore.keyalias^\^":^\^"rangeradmin^\^",^\^"ranger.service.https.attrib.keystore.pass^\^":^\^"xasecure^\^",^\^"ranger.service.https.attrib.ssl.enabled^\^":^\^"false^\^",^\^"ranger.service.https.port^\^":^\^"6182^\^",^\^"ranger.spnego.kerberos.keytab^\^":^\^"^\^",^\^"ranger.spnego.kerberos.principal^\^":^\^"*^\^",^\^"ranger.sso.browser.useragent^\^":^\^"Mozilla,chrome^\^",^\^"ranger.sso.enabled^\^":^\^"false^\^",^\^"ranger.sso.providerurl^\^":^\^"^\^",^\^"ranger.sso.publicKey^\^":^\^"^\^",^\^"ranger.truststore.alias^\^":^\^"trustStoreAlias^\^",^\^"ranger.truststore.file^\^":^\^"
/etc/ranger/admin/conf/ranger-admin-keystore.jks^\^",^\^"ranger.truststore.password^\^":^\^"changeit^\^",^\^"ranger.unixauth.remote.login.enabled^\^":^\^"true^\^",^\^"ranger.unixauth.service.hostname^\^":^\^"^{^{ugsync_host^}^}^\^",^\^"ranger.unixauth.service.port^\^":^\^"5151^\^"^},^\^"service_config_version_note^\^":^\^"Ranger^初^始^化^配^置^\^"^},^{^\^"type^\^":^\^"ranger-env^\^",^\^"properties^\^":^{^\^"admin_password^\^":^\^"Xjyb2024^\^",^\^"admin_username^\^":^\^"admin^\^",^\^"content^\^":^\^"^#^!/bin/bash^\^\n^\^\n^# Licensed to the Apache Software Foundation (ASF) under one^\^\n^# or more contributor license agreements. See the NOTICE file^\^\n^# distributed with this work for additional information^\^\n^# regarding copyright ownership. The ASF licenses this file^\^\n^# to you under the Apache License, Version 2.0 (the^\^\n^# ^\^\^\^"License^\^\^\^"); you may not use this file except in compliance^\^\n^# with the License. You may obtain a copy of the License at^\^\n^#^\^\n^# http://www.apache.org/licenses/LICENSE-2.0^\^\n^#^\^\n^# Unless required by applicable law or agreed to in writing, software^\^\n^# distributed under the License is distributed on an ^\^\^\^"AS IS^\^\^\^" BASIS,^\^\n^# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.^\^\n^# See the License for the specific language governing permissions and^\^\n^# limitations under the License.^\^\n^\^\n^# Set Ranger-specific environment variables here.^\^\nexport JAVA_HOME=^{^{java_home^}^}^\^\n^\^\n^{^% if is_ranger_admin_host ^%^}^\^\n^# Ranger Admin specific environment variables here.^\^\nexport RANGER_ADMIN_LOG_DIR=^{^{admin_log_dir^}^}^\^\nexport RANGER_PID_DIR_PATH=^{^{ranger_pid_dir^}^}^\^\nexport RANGER_USER=^{^{unix_user^}^}^\^\nranger_admin_max_heap_size=^{^{ranger_admin_max_heap_size^}^}^\^\n^{^% if security_enabled ^%^}^\^\nexport JAVA_OPTS=^\^\^\^" ^$^{JAVA_OPTS^} -Dzookeeper.sasl.client.username=^{^{zookeeper_principal_primary^}^} ^\^\^\^"^\^\n^{^% endif ^%^}^\^\n^{^% 
endif ^%^}^\^\n^\^\n^{^% if is_ranger_usersync_host ^%^}^\^\n^# Ranger Usersync specific environment variables here.^\^\nexport USERSYNC_CONF_DIR=^{^{ranger_ugsync_conf^}^}^\^\nexport logdir=^{^{usersync_log_dir^}^}^\^\nexport USERSYNC_PID_DIR_PATH=^{^{ranger_pid_dir^}^}^\^\nexport UNIX_USERSYNC_USER=^{^{unix_user^}^}^\^\nranger_usersync_max_heap_size=^{^{ranger_usersync_max_heap_size^}^}^\^\n^{^% endif ^%^}^\^\n^\^\n^{^% if is_ranger_tagsync_host ^%^}^\^\n^# Ranger Tagsync specific environment variables here.^\^\nexport RANGER_TAGSYNC_LOG_DIR=^{^{tagsync_log_dir^}^}^\^\nexport TAGSYNC_PID_DIR_PATH=^{^{ranger_pid_dir^}^}^\^\nexport UNIX_TAGSYNC_USER=^{^{unix_user^}^}^\^\nranger_tagsync_max_heap_size=^{^{ranger_tagsync_max_heap_size^}^}^\^\n^{^% endif ^%^}^\^",^\^"create_db_dbuser^\^":^\^"false^\^",^\^"is_external_solrCloud_enabled^\^":^\^"false^\^",^\^"is_external_solrCloud_kerberos^\^":^\^"false^\^",^\^"is_nested_groupsync_enabled^\^":^\^"false^\^",^\^"is_solrCloud_enabled^\^":^\^"false^\^",^\^"keyadmin_user_password^\^":^\^"Xjyb2024^\^",^\^"ranger-atlas-plugin-enabled^\^":^\^"No^\^",^\^"ranger-elasticsearch-plugin-enabled^\^":^\^"No^\^",^\^"ranger-hbase-plugin-enabled^\^":^\^"No^\^",^\^"ranger-hdfs-plugin-enabled^\^":^\^"No^\^",^\^"ranger-hive-plugin-enabled^\^":^\^"No^\^",^\^"ranger-kafka-plugin-enabled^\^":^\^"No^\^",^\^"ranger-knox-plugin-enabled^\^":^\^"No^\^",^\^"ranger-nifi-plugin-enabled^\^":^\^"No^\^",^\^"ranger-storm-plugin-enabled^\^":^\^"No^\^",^\^"ranger-yarn-plugin-enabled^\^":^\^"No^\^",^\^"ranger_admin_max_heap_size^\^":^\^"1g^\^",^\^"ranger_admin_password^\^":^\^"P1^!qFEvagmycLlTC^\^",^\^"ranger_admin_username^\^":^\^"amb_ranger_admin^\^",^\^"ranger_pid_dir^\^":^\^"/var/run/ranger^\^",^\^"ranger_privelege_user_jdbc_url^\^":^\^"jdbc:mysql://10.144.187.203:3306^\^",^\^"ranger_solr_collection_name^\^":^\^"ranger_audits^\^",^\^"ranger_solr_config_set^\^":^\^"ranger_audits^\^",^\^"ranger_solr_replication_factor^\^":^\^"1^\^",^\^"ranger_solr_shards^\^":^
\^"1^\^",^\^"ranger_usersync_max_heap_size^\^":^\^"1g^\^",^\^"rangerusersync_user_password^\^":^\^"Xjyb2024^\^",^\^"xasecure.audit.destination.elasticsearch^\^":^\^"false^\^",^\^"xasecure.audit.destination.hdfs^\^":^\^"true^\^",^\^"xasecure.audit.destination.hdfs.dir^\^":^\^"hdfs://btyb/ranger/audit^\^",^\^"xasecure.audit.destination.solr^\^":^\^"false^\^",^\^"xml_configurations_supported^\^":^\^"true^\^",^\^"ranger_group^\^":^\^"ocdp^\^",^\^"ranger_user^\^":^\^"ocdp^\^"^},^\^"service_config_version_note^\^":^\^"Ranger^初^始^化^配^置^\^"^},^{^\^"type^\^":^\^"ranger-solr-configuration^\^",^\^"properties^\^":^{^\^"content^\^":^\^"^<?xml version=^\^\^\^"1.0^\^\^\^" encoding=^\^\^\^"UTF-8^\^\^\^" ?^>^\^\n^<^!--^\^\n Licensed to the Apache Software Foundation (ASF) under one or more^\^\n contributor license agreements. See the NOTICE file distributed with^\^\n this work for additional information regarding copyright ownership.^\^\n The ASF licenses this file to You under the Apache License, Version 2.0^\^\n (the ^\^\^\^"License^\^\^\^"); you may not use this file except in compliance with^\^\n the License. 
You may obtain a copy of the License at^\^\n^\^\n http://www.apache.org/licenses/LICENSE-2.0^\^\n^\^\n Unless required by applicable law or agreed to in writing, software^\^\n distributed under the License is distributed on an ^\^\^\^"AS IS^\^\^\^" BASIS,^\^\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.^\^\n See the License for the specific language governing permissions and^\^\n limitations under the License.^\^\n--^>^\^\n^\^\n^<^!--^\^\n For more details about configurations options that may appear in^\^\n this file, see http://wiki.apache.org/solr/SolrConfigXml.^\^\n--^>^\^\n^<config^>^\^\n ^<^!-- In all configuration below, a prefix of ^\^\^\^"solr.^\^\^\^" for class names^\^\n is an alias that causes solr to search appropriate packages,^\^\n including org.apache.solr.(search^|update^|request^|core^|analysis)^\^\n^\^\n You may also specify a fully qualified Java classname if you^\^\n have your own custom plugins.^\^\n --^>^\^\n^\^\n ^<^!-- Controls what version of Lucene various components of Solr^\^\n adhere to. Generally, you want to use the latest version to^\^\n get all bug fixes and improvements. 
It is highly recommended^\^\n that you fully re-index after changing this setting as it can^\^\n affect both how text is indexed and queried.^\^\n --^>^\^\n ^<luceneMatchVersion^>6.6.2^</luceneMatchVersion^>^\^\n^\^\n ^<^!-- ^<lib/^> directives can be used to instruct Solr to load any Jars^\^\n identified and use them to resolve any ^\^\^\^"plugins^\^\^\^" specified in^\^\n your solrconfig.xml or schema.xml (ie: Analyzers, Request^\^\n Handlers, etc...).^\^\n^\^\n All directories and paths are resolved relative to the^\^\n instanceDir.^\^\n^\^\n Please note that ^<lib/^> directives are processed in the order^\^\n that they appear in your solrconfig.xml file, and are ^\^\^\^"stacked^\^\^\^"^\^\n on top of each other when building a ClassLoader - so if you have^\^\n plugin jars with dependencies on other jars, the ^\^\^\^"lower level^\^\^\^"^\^\n dependency jars should be loaded first.^\^\n^\^\n If a ^\^\^\^"./lib^\^\^\^" directory exists in your instanceDir, all files^\^\n found in it are included as if you had used the following^\^\n syntax...^\^\n^\^\n ^<lib dir=^\^\^\^"./lib^\^\^\^" /^>^\^\n --^>^\^\n^\^\n ^<^!-- A 'dir' option by itself adds any files found in the directory^\^\n to the classpath, this is useful for including all jars in a^\^\n directory.^\^\n^\^\n When a 'regex' is specified in addition to a 'dir', only the^\^\n files in that directory which completely match the regex^\^\n (anchored on both ends) will be included.^\^\n^\^\n If a 'dir' option (with or without a regex) is used and nothing^\^\n is found that matches, a warning will be logged.^\^\n^\^\n The examples below can be used to load some solr-contribs along^\^\n with their external dependencies.^\^\n --^>^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/dist/^\^\^\^" regex=^\^\^\^"solr-dataimporthandler-.*^\^\^\^\.jar^\^\^\^" /^>^\^\n^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/contrib/extraction/lib^\^\^\^" regex=^\^\^\^".*^\^\^\^\.jar^\^\^\^" /^>^\^\n ^<lib 
dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/dist/^\^\^\^" regex=^\^\^\^"solr-cell-^\^\^\^\d.*^\^\^\^\.jar^\^\^\^" /^>^\^\n^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/contrib/clustering/lib/^\^\^\^" regex=^\^\^\^".*^\^\^\^\.jar^\^\^\^" /^>^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/dist/^\^\^\^" regex=^\^\^\^"solr-clustering-^\^\^\^\d.*^\^\^\^\.jar^\^\^\^" /^>^\^\n^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/contrib/langid/lib/^\^\^\^" regex=^\^\^\^".*^\^\^\^\.jar^\^\^\^" /^>^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/dist/^\^\^\^" regex=^\^\^\^"solr-langid-^\^\^\^\d.*^\^\^\^\.jar^\^\^\^" /^>^\^\n^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/contrib/velocity/lib^\^\^\^" regex=^\^\^\^".*^\^\^\^\.jar^\^\^\^" /^>^\^\n ^<lib dir=^\^\^\^"^$^{solr.install.dir:../../../..^}/dist/^\^\^\^" regex=^\^\^\^"solr-velocity-^\^\^\^\d.*^\^\^\^\.jar^\^\^\^" /^>^\^\n^\^\n ^<^!-- an exact 'path' can be used instead of a 'dir' to specify a^\^\n specific jar file. This will cause a serious error to be logged^\^\n if it can't be loaded.^\^\n --^>^\^\n ^<^!--^\^\n ^<lib path=^\^\^\^"../a-jar-that-does-not-exist.jar^\^\^\^" /^>^\^\n --^>^\^\n^\^\n ^<^!-- Data Directory^\^\n^\^\n Used to specify an alternate directory to hold all index data^\^\n other than the default ./data under the Solr home. If^\^\n replication is in use, this should match the replication^\^\n configuration.^\^\n --^>^\^\n ^<dataDir^>^$^{solr.data.dir:^}^</dataDir^>^\^\n^\^\n^\^\n ^<^!-- The DirectoryFactory to use for indexes.^\^\n^\^\n solr.StandardDirectoryFactory is filesystem^\^\n based and tries to pick the best implementation for the current^\^\n JVM and platform. 
solr.NRTCachingDirectoryFactory, the default,^\^\n wraps solr.StandardDirectoryFactory and caches small files in memory^\^\n for better NRT performance.^\^\n^\^\n One can force a particular implementation via solr.MMapDirectoryFactory,^\^\n solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.^\^\n^\^\n solr.RAMDirectoryFactory is memory based, not^\^\n persistent, and doesn't work with replication.^\^\n --^>^\^\n ^<directoryFactory name=^\^\^\^"DirectoryFactory^\^\^\^"^\^\n class=^\^\^\^"^$^{solr.directoryFactory:solr.NRTCachingDirectoryFactory^}^\^\^\^"^>^\^\n^\^\n^\^\n ^<^!-- These will be used if you are using the solr.HdfsDirectoryFactory,^\^\n otherwise they will be ignored. If you don't plan on using hdfs,^\^\n you can safely remove this section. --^>^\^\n ^<^!-- The root directory that collection data should be written to. --^>^\^\n ^<str name=^\^\^\^"solr.hdfs.home^\^\^\^"^>^$^{solr.hdfs.home:^}^</str^>^\^\n ^<^!-- The hadoop configuration files to use for the hdfs client. --^>^\^\n ^<str name=^\^\^\^"solr.hdfs.confdir^\^\^\^"^>^$^{solr.hdfs.confdir:^}^</str^>^\^\n ^<^!-- Enable/Disable the hdfs cache. --^>^\^\n ^<str name=^\^\^\^"solr.hdfs.blockcache.enabled^\^\^\^"^>^$^{solr.hdfs.blockcache.enabled:true^}^</str^>^\^\n ^<^!-- Enable/Disable using one global cache for all SolrCores.^\^\n The settings used will be from the first HdfsDirectoryFactory created. --^>^\^\n ^<str name=^\^\^\^"solr.hdfs.blockcache.global^\^\^\^"^>^$^{solr.hdfs.blockcache.global:true^}^</str^>^\^\n^\^\n ^</directoryFactory^>^\^\n^\^\n ^<^!-- The CodecFactory for defining the format of the inverted index.^\^\n The default implementation is SchemaCodecFactory, which is the official Lucene^\^\n index format, but hooks into the schema to provide per-field customization of^\^\n the postings lists and per-document values in the fieldType element^\^\n (postingsFormat/docValuesFormat). 
Note that most of the alternative implementations^\^\n are experimental, so if you choose to customize the index format, it's a good^\^\n idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)^\^\n before upgrading to a newer version to avoid unnecessary reindexing.^\^\n --^>^\^\n ^<codecFactory class=^\^\^\^"solr.SchemaCodecFactory^\^\^\^"/^>^\^\n^\^\n ^<^!-- To enable dynamic schema REST APIs, use the following for ^<schemaFactory^>: --^>^\^\n^\^\n ^<schemaFactory class=^\^\^\^"ManagedIndexSchemaFactory^\^\^\^"^>^\^\n ^<bool name=^\^\^\^"mutable^\^\^\^"^>true^</bool^>^\^\n ^<str name=^\^\^\^"managedSchemaResourceName^\^\^\^"^>managed-schema^</str^>^\^\n ^</schemaFactory^>^\^\n^<^!--^\^\n When ManagedIndexSchemaFactory is specified, Solr will load the schema from^\^\n the resource named in 'managedSchemaResourceName', rather than from schema.xml.^\^\n Note that the managed schema resource CANNOT be named schema.xml. If the managed^\^\n schema does not exist, Solr will create it after reading schema.xml, then rename^\^\n 'schema.xml' to 'schema.xml.bak'.^\^\n^\^\n Do NOT hand edit the managed schema - external modifications will be ignored and^\^\n overwritten as a result of schema modification REST API calls.^\^\n^\^\n When ManagedIndexSchemaFactory is specified with mutable = true, schema^\^\n modification REST API calls will be allowed; otherwise, error responses will be^\^\n sent back for these requests.^\^\n^\^\n ^<schemaFactory class=^\^\^\^"ClassicIndexSchemaFactory^\^\^\^"/^>^\^\n --^>^\^\n^\^\n ^<^!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^\^\n Index Config - These settings control low-level behavior of indexing^\^\n Most example settings here show the default value, but are commented^\^\n out, to more easily see where customizations have been made.^\^\n^\^\n Note: This replaces ^<indexDefaults^> and ^<mainIndex^> from older versions^\^\n 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ --^>^\^\n ^<indexConfig^>^\^\n ^<^!-- maxFieldLength was removed in 4.0. To get similar behavior, include a^\^\n LimitTokenCountFilterFactory in your fieldType definition. E.g.^\^\n ^<filter class=^\^\^\^"solr.LimitTokenCountFilterFactory^\^\^\^" maxTokenCount=^\^\^\^"10000^\^\^\^"/^>^\^\n --^>^\^\n ^<^!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 --^>^\^\n ^<^!-- ^<writeLockTimeout^>1000^</writeLockTimeout^> --^>^\^\n^\^\n ^<^!-- The maximum number of simultaneous threads that may be^\^\n indexing documents at once in IndexWriter; if more than this^\^\n many threads arrive they will wait for others to finish.^\^\n Default in Solr/Lucene is 8. --^>^\^\n ^<^!-- ^<maxIndexingThreads^>8^</maxIndexingThreads^> --^>^\^\n^\^\n ^<^!-- Expert: Enabling compound file will use less files for the index,^\^\n using fewer file descriptors on the expense of performance decrease.^\^\n Default in Lucene is ^\^\^\^"true^\^\^\^". Default in Solr is ^\^\^\^"false^\^\^\^" (since 3.6) --^>^\^\n ^<^!-- ^<useCompoundFile^>false^</useCompoundFile^> --^>^\^\n^\^\n ^<^!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene^\^\n indexing for buffering added documents and deletions before they are^\^\n flushed to the Directory.^\^\n maxBufferedDocs sets a limit on the number of documents buffered^\^\n before flushing.^\^\n If both ramBufferSizeMB and maxBufferedDocs is set, then^\^\n Lucene will flush based on whichever limit is hit first.^\^\n The default is 100 MB. 
--^>^\^\n ^<^!-- ^<ramBufferSizeMB^>100^</ramBufferSizeMB^> --^>^\^\n ^<^!-- ^<maxBufferedDocs^>1000^</maxBufferedDocs^> --^>^\^\n^\^\n ^<^!-- Expert: Merge Policy^\^\n The Merge Policy in Lucene controls how merging of segments is done.^\^\n The default since Solr/Lucene 3.3 is TieredMergePolicy.^\^\n The default since Lucene 2.3 was the LogByteSizeMergePolicy,^\^\n Even older versions of Lucene used LogDocMergePolicy.^\^\n --^>^\^\n ^<^!--^\^\n ^<mergePolicy class=^\^\^\^"org.apache.lucene.index.TieredMergePolicy^\^\^\^"^>^\^\n ^<int name=^\^\^\^"maxMergeAtOnce^\^\^\^"^>10^</int^>^\^\n ^<int name=^\^\^\^"segmentsPerTier^\^\^\^"^>10^</int^>^\^\n ^</mergePolicy^>^\^\n --^>^\^\n^\^\n ^<^!-- Merge Factor^\^\n The merge factor controls how many segments will get merged at a time.^\^\n For TieredMergePolicy, mergeFactor is a convenience parameter which^\^\n will set both MaxMergeAtOnce and SegmentsPerTier at once.^\^\n For LogByteSizeMergePolicy, mergeFactor decides how many new segments^\^\n will be allowed before they are merged into one.^\^\n Default is 10 for both merge policies.^\^\n --^>^\^\n ^<^!--^\^\n ^<mergeFactor^>10^</mergeFactor^>^\^\n --^>^\^\n^\^\n ^<^!-- Ranger customization. Set to 5 to trigger purging of deleted documents more often --^>^\^\n ^<mergePolicyFactory class=^\^\^\^"org.apache.solr.index.TieredMergePolicyFactory^\^\^\^"^>^\^\n ^<int name=^\^\^\^"maxMergeAtOnce^\^\^\^"^>^{^{ranger_audit_logs_merge_factor^}^}^</int^>^\^\n ^<int name=^\^\^\^"segmentsPerTier^\^\^\^"^>^{^{ranger_audit_logs_merge_factor^}^}^</int^>^\^\n ^</mergePolicyFactory^>^\^\n^\^\n ^<^!-- Expert: Merge Scheduler^\^\n The Merge Scheduler in Lucene controls how merges are^\^\n performed. 
The ConcurrentMergeScheduler (Lucene 2.3 default)^\^\n can perform merges in the background using separate threads.^\^\n The SerialMergeScheduler (Lucene 2.2 default) does not.^\^\n --^>^\^\n ^<^!--^\^\n ^<mergeScheduler class=^\^\^\^"org.apache.lucene.index.ConcurrentMergeScheduler^\^\^\^"/^>^\^\n --^>^\^\n^\^\n ^<^!-- LockFactory^\^\n^\^\n This option specifies which Lucene LockFactory implementation^\^\n to use.^\^\n^\^\n single = SingleInstanceLockFactory - suggested for a^\^\n read-only index or when there is no possibility of^\^\n another process trying to modify the index.^\^\n native = NativeFSLockFactory - uses OS native file locking.^\^\n Do not use when multiple solr webapps in the same^\^\n JVM are attempting to share a single index.^\^\n simple = SimpleFSLockFactory - uses a plain file for locking^\^\n^\^\n Defaults: 'native' is default for Solr3.6 and later, otherwise^\^\n 'simple' is the default^\^\n^\^\n More details on the nuances of each LockFactory...^\^\n http://wiki.apache.org/lucene-java/AvailableLockFactories^\^\n --^>^\^\n ^<lockType^>^$^{solr.lock.type:native^}^</lockType^>^\^\n^\^\n ^<^!-- Unlock On Startup^\^\n^\^\n If true, unlock any held write or commit locks on startup.^\^\n This defeats the locking mechanism that allows multiple^\^\n processes to safely access a lucene index, and should be used^\^\n with care. Default is ^\^\^\^"false^\^\^\^".^\^\n^\^\n This is not needed if lock type is 'single'^\^\n --^>^\^\n ^<^!--^\^\n ^<unlockOnStartup^>false^</unlockOnStartup^>^\^\n --^>^\^\n^\^\n ^<^!-- Commit Deletion Policy^\^\n Custom deletion policies can be specified here. 
The class must^\^\n implement org.apache.lucene.index.IndexDeletionPolicy.^\^\n^\^\n The default Solr IndexDeletionPolicy implementation supports^\^\n deleting index commit points on number of commits, age of^\^\n commit point and optimized status.^\^\n^\^\n The latest commit point should always be preserved regardless^\^\n of the criteria.^\^\n --^>^\^\n ^<^!--^\^\n ^<deletionPolicy class=^\^\^\^"solr.SolrDeletionPolicy^\^\^\^"^>^\^\n --^>^\^\n ^<^!-- The number of commit points to be kept --^>^\^\n ^<^!-- ^<str name=^\^\^\^"maxCommitsToKeep^\^\^\^"^>1^</str^> --^>^\^\n ^<^!-- The number of optimized commit points to be kept --^>^\^\n ^<^!-- ^<str name=^\^\^\^"maxOptimizedCommitsToKeep^\^\^\^"^>0^</str^> --^>^\^\n ^<^!--^\^\n Delete all commit points once they have reached the given age.^\^\n Supports DateMathParser syntax e.g.^\^\n --^>^\^\n ^<^!--^\^\n ^<str name=^\^\^\^"maxCommitAge^\^\^\^"^>30MINUTES^</str^>^\^\n ^<str name=^\^\^\^"maxCommitAge^\^\^\^"^>1DAY^</str^>^\^\n --^>^\^\n ^<^!--^\^\n ^</deletionPolicy^>^\^\n --^>^\^\n^\^\n ^<^!-- Lucene Infostream^\^\n^\^\n To aid in advanced debugging, Lucene provides an ^\^\^\^"InfoStream^\^\^\^"^\^\n of detailed information when indexing.^\^\n^\^\n Setting the value to true will instruct the underlying Lucene^\^\n IndexWriter to write its info stream to solr's log. By default,^\^\n this is enabled here, and controlled through log4j.properties.^\^\n --^>^\^\n ^<infoStream^>true^</infoStream^>^\^\n ^</indexConfig^>^\^\n^\^\n^\^\n ^<^!-- JMX^\^\n^\^\n This example enables JMX if and only if an existing MBeanServer^\^\n is found, use this if you want to configure JMX through JVM^\^\n parameters. 
Remove this to disable exposing Solr configuration^\^\n and statistics to JMX.^\^\n^\^\n For more details see http://wiki.apache.org/solr/SolrJmx^\^\n --^>^\^\n ^<jmx /^>^\^\n ^<^!-- If you want to connect to a particular server, specify the^\^\n agentId^\^\n --^>^\^\n ^<^!-- ^<jmx agentId=^\^\^\^"myAgent^\^\^\^" /^> --^>^\^\n ^<^!-- If you want to start a new MBeanServer, specify the serviceUrl --^>^\^\n ^<^!-- ^<jmx serviceUrl=^\^\^\^"service:jmx:rmi:///jndi/rmi://localhost:9999/solr^\^\^\^"/^>^\^\n --^>^\^\n^\^\n ^<^!-- The default high-performance update handler --^>^\^\n ^<updateHandler class=^\^\^\^"solr.DirectUpdateHandler2^\^\^\^"^>^\^\n^\^\n ^<^!-- Enables a transaction log, used for real-time get, durability, and^\^\n and solr cloud replica recovery. The log can grow as big as^\^\n uncommitted changes to the index, so use of a hard autoCommit^\^\n is recommended (see below).^\^\n ^\^\^\^"dir^\^\^\^" - the target directory for transaction logs, defaults to the^\^\n solr data directory. --^>^\^\n ^<updateLog^>^\^\n ^<str name=^\^\^\^"dir^\^\^\^"^>^$^{solr.ulog.dir:^}^</str^>^\^\n ^</updateLog^>^\^\n^\^\n ^<^!-- AutoCommit^\^\n^\^\n Perform a
......
^<str name=^\^\^\^"hl.tag.post^\^\^\^"^>^<^!^[CDATA^[^</b^>^]^]^>^</str^>^\^\n ^</lst^>^\^\n ^</fragmentsBuilder^>^\^\n^\^\n ^<boundaryScanner name=^\^\^\^"default^\^\^\^"^\^\n default=^\^\^\^"true^\^\^\^"^\^\n class=^\^\^\^"solr.highlight.SimpleBoundaryScanner^\^\^\^"^>^\^\n ^<lst name=^\^\^\^"defaults^\^\^\^"^>^\^\n ^<str name=^\^\^\^"hl.bs.maxScan^\^\^\^"^>10^</str^>^\^\n ^<str name=^\^\^\^"hl.bs.chars^\^\^\^"^>.,^!? &^#9;&^#10;&^#13;^</str^>^\^\n ^</lst^>^\^\n ^</boundaryScanner^>^\^\n^\^\n ^<boundaryScanner name=^\^\^\^"breakIterator^\^\^\^"^\^\n class=^\^\^\^"solr.highlight.BreakIteratorBoundaryScanner^\^\^\^"^>^\^\n ^<lst name=^\^\^\^"defaults^\^\^\^"^>^\^\n ^<^!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE --^>^\^\n ^<str name=^\^\^\^"hl.bs.type^\^\^\^"^>WORD^</str^>^\^\n ^<^!-- language and country are used when constructing Locale object. --^>^\^\n ^<^!-- And the Locale object will be used when getting instance of BreakIterator --^>^\^\n ^<str name=^\^\^\^"hl.bs.language^\^\^\^"^>en^</str^>^\^\n ^<str name=^\^\^\^"hl.bs.country^\^\^\^"^>US^</str^>^\^\n ^</lst^>^\^\n ^</boundaryScanner^>^\^\n ^</highlighting^>^\^\n ^</searchComponent^>^\^\n^\^\n ^<^!-- Update Processors^\^\n^\^\n Chains of Update Processor Factories for dealing with Update^\^\n Requests can be declared, and then used by name in Update^\^\n Request Processors^\^\n^\^\n http://wiki.apache.org/solr/UpdateRequestProcessor^\^\n^\^\n --^>^\^\n^\^\n ^<^!-- Add unknown fields to the schema^\^\n^\^\n An example field type guessing update processor that will^\^\n attempt to parse string-typed field values as Booleans, Longs,^\^\n Doubles, or Dates, and then add schema fields with the guessed^\^\n field types.^\^\n^\^\n This requires that the schema is both managed and mutable, by^\^\n declaring schemaFactory as ManagedIndexSchemaFactory, with^\^\n mutable specified as true.^\^\n^\^\n See http://wiki.apache.org/solr/GuessingFieldTypes^\^\n --^>^\^\n 
^<updateRequestProcessorChain name=^\^\^\^"add-unknown-fields-to-the-schema^\^\^\^"^>^\^\n ^<processor class=^\^\^\^"solr.DefaultValueUpdateProcessorFactory^\^\^\^"^>^\^\n ^<str name=^\^\^\^"fieldName^\^\^\^"^>_ttl_^</str^>^\^\n ^<str name=^\^\^\^"value^\^\^\^"^>+^{^{ranger_audit_max_retention_days^}^}DAYS^</str^>^\^\n ^</processor^>^\^\n ^<processor class=^\^\^\^"solr.processor.DocExpirationUpdateProcessorFactory^\^\^\^"^>^\^\n ^<int name=^\^\^\^"autoDeletePeriodSeconds^\^\^\^"^>86400^</int^>^\^\n ^<str name=^\^\^\^"ttlFieldName^\^\^\^"^>_ttl_^</str^>^\^\n ^<str name=^\^\^\^"expirationFieldName^\^\^\^"^>_expire_at_^</str^>^\^\n ^</processor^>^\^\n ^<processor class=^\^\^\^"solr.FirstFieldValueUpdateProcessorFactory^\^\^\^"^>^\^\n ^<str name=^\^\^\^"fieldName^\^\^\^"^>_expire_at_^</str^>^\^\n ^</processor^>^\^\n^\^\n ^<processor class=^\^\^\^"solr.RemoveBlankFieldUpdateProcessorFactory^\^\^\^"/^>^\^\n ^<processor class=^\^\^\^"solr.ParseBooleanFieldUpdateProcessorFactory^\^\^\^"/^>^\^\n ^<processor class=^\^\^\^"solr.ParseLongFieldUpdateProcessorFactory^\^\^\^"/^>^\^\n ^<processor class=^\^\^\^"solr.ParseDoubleFieldUpdateProcessorFactory^\^\^\^"/^>^\^\n ^<processor class=^\^\^\^"solr.ParseDateFieldUpdateProcessorFactory^\^\^\^"^>^\^\n ^<arr name=^\^\^\^"format^\^\^\^"^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm:ss.SSSZ^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm:ss,SSSZ^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm:ss.SSS^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm:ss,SSS^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm:ssZ^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm:ss^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mmZ^</str^>^\^\n ^<str^>yyyy-MM-dd'T'HH:mm^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm:ss.SSSZ^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm:ss,SSSZ^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm:ss.SSS^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm:ss,SSS^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm:ssZ^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm:ss^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mmZ^</str^>^\^\n ^<str^>yyyy-MM-dd HH:mm^</str^>^\^\n 
^<str^>yyyy-MM-dd^</str^>^\^\n ^</arr^>^\^\n ^</processor^>^\^\n ^<processor class=^\^\^\^"solr.LogUpdateProcessorFactory^\^\^\^"/^>^\^\n ^<processor class=^\^\^\^"solr.RunUpdateProcessorFactory^\^\^\^"/^>^\^\n ^</updateRequestProcessorChain^>^\^\n^\^\n^\^\n ^<^!-- Deduplication^\^\n^\^\n An example dedup update processor that creates the ^\^\^\^"id^\^\^\^" field^\^\n on the fly based on the hash code of some other fields. This^\^\n example has overwriteDupes set to false since we are using the^\^\n id field as the signatureField and Solr
The --data-raw payload consists largely of comment text. Ambari initializes the Ranger configuration from /var/lib/ambari-server/resources/stacks/DIF/3.0/services/RANGER/properties/ranger-solrconfig.xml.j2, and that template turns out to contain far too many comments. The resulting HTTP request body was so large that the transfer timed out. After deleting the comment content and reinstalling, the installation succeeded.
Solution
A blank error during Ranger installation can be caused by an oversized HTTP request timing out in transit. This typically happens when the Ranger configuration template contains too much comment content.
1. Edit the Ranger configuration template on the Ambari server and delete unnecessary comment content to reduce the file size:
sudo vim /var/lib/ambari-server/resources/stacks/DIF/3.0/services/RANGER/properties/ranger-solrconfig.xml.j2
2. Make the same change to the cached copy on the agent side:
sudo vim /var/lib/ambari-agent/cache/stacks/DIF/3.0/services/RANGER/properties/ranger-solrconfig.xml.j2
3. Restart the services and retry the Ranger installation:
sudo ambari-server restart
sudo ambari-agent restart
Slow configuration loading
Tuning kernel network parameters increases the server's connection backlog capacity and resolves slow configuration loading.
1. Check the current somaxconn setting:
sudo sysctl -a | grep net.core.somaxconn
2. Change somaxconn temporarily:
sudo sysctl -w net.core.somaxconn=32768
3. Make the change permanent: edit /etc/sysctl.conf (sudo vim /etc/sysctl.conf), append the line net.core.somaxconn=32768 at the end of the file, then reload:
sudo sysctl -p
Complete commands:
# Check the current net.core.somaxconn value
sysctl -a | grep net.core.somaxconn
# Change net.core.somaxconn at runtime
sysctl -w net.core.somaxconn=32768
# For a permanent change, add the following line to /etc/sysctl.conf, then run sysctl -p
net.core.somaxconn=32768
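The permanent change above can be made idempotent so rerunning it never appends a duplicate line. A minimal sketch (the persist_somaxconn name and the CONF argument are for illustration; in practice you would run it as root against /etc/sysctl.conf and follow with sysctl -p):

```shell
# persist_somaxconn CONF: ensure CONF contains exactly one
# net.core.somaxconn=32768 line, replacing any existing setting.
persist_somaxconn() {
  conf="$1"
  if grep -q '^net.core.somaxconn' "$conf"; then
    # a setting is already present: rewrite it to the desired value
    sed -i 's/^net\.core\.somaxconn.*/net.core.somaxconn=32768/' "$conf"
  else
    # no setting yet: append one
    echo 'net.core.somaxconn=32768' >> "$conf"
  fi
}
```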
Ranger Admin fails to start: database initialization error
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-python-wrap /usr/hdp/current/ranger-admin/db_setup.py' returned 1. 2024-01-20 13:01:59,288 [I] DB FLAVOR :MYSQL
2024-01-20 13:01:59,288 [I] --------- Verifying Ranger DB connection ---------
2024-01-20 13:01:59,288 [I] Checking connection..
2024-01-20 13:01:59,288 [JISQL] /opt/jdk/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://10.144.187.203/ranger -u 'ranger' -p '********' -noheader -trim -c \; -query "select 1;"
Sat Jan 20 13:01:59 CST 2024 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2024-01-20 13:01:59,875 [I] Checking connection passed.
Solution
If Ranger Admin hits a database initialization error during startup, the cause is usually the MySQL configuration.
1. Adjust the MySQL configuration. Connect to MySQL:
mysql -u root -p
Set the global parameter that allows functions to be created while binary logging is enabled:
SET GLOBAL log_bin_trust_function_creators = 1;
2. Rebuild the Ranger database by dropping and recreating it:
DROP DATABASE ranger;
CREATE DATABASE ranger;
3. Restart and retry starting Ranger Admin, making sure the new configuration takes effect:
sudo ambari-server restart
sudo ambari-agent restart
sudo ambari-server setup
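The MySQL statements above can be collected into one place so the fix is repeatable. A sketch (the ranger_db_reset_sql function name is hypothetical, and DROP DATABASE IF EXISTS is used here as a safer variant of the plain DROP DATABASE above):

```shell
# ranger_db_reset_sql: print the SQL statements used in the fix above.
# Pipe the output into mysql, e.g.:  ranger_db_reset_sql | mysql -u root -p
ranger_db_reset_sql() {
  cat <<'SQL'
SET GLOBAL log_bin_trust_function_creators = 1;
DROP DATABASE IF EXISTS ranger;
CREATE DATABASE ranger;
SQL
}
```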
Summary
By removing redundant comments, tuning kernel parameters, and adjusting the MySQL configuration, you can resolve both the blank error and the database initialization failure encountered while installing Ranger through Ambari. If the problems persist, check the relevant logs and consult the official documentation for further help.