
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FsTracer.get(Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/tracing/Tracer; #8038

laixueyong opened this issue Nov 13, 2024 · 0 comments

Search before asking

  • I had searched in the issues and found no similar issues.

What happened

After the job was successfully submitted to YARN, it failed with java.lang.NoSuchMethodError: org.apache.hadoop.fs.FsTracer.get(Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/tracing/Tracer;
The jars involved in the error are connector-milvus-2.3.8.jar and connector-file-obs-2.3.8.jar.
The Hive-related jars in use are (CDH 6.3.2):
hive-exec-2.1.1-cdh6.3.2.jar
hive-jdbc-2.1.1-cdh6.3.2.jar
hive-metastore-2.1.1-cdh6.3.2.jar
hive-service-2.1.1-cdh6.3.2.jar
libthrift-0.9.3-1.jar
mysql-connector-java-8.0.28.jar
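
To confirm which of the shipped jars bundle their own copies of the Hadoop classes named in the error, a check along these lines can be run against the install (a sketch only; the path is the one from the pipeline.jars log below, and the jar names are the ones reported above):

# list any bundled copies of FsTracer / DFSClient inside the suspect jars
cd /home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib
for jar in connector-milvus-2.3.8.jar connector-file-obs-2.3.8.jar seatunnel-hadoop3-3.1.4-uber.jar; do
  echo "== $jar =="
  unzip -l "$jar" | grep -E 'org/apache/hadoop/(fs/FsTracer|hdfs/DFSClient)\.class' || echo "  (not bundled)"
done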

SeaTunnel Version

apache-seatunnel-2.3.8

SeaTunnel Config

# Defining the runtime environment
env {
  parallelism = 2
  job.mode = "BATCH"
}
source {
    Jdbc {
        url = "jdbc:hive2://cdhtest05:10000/default;principal=hive/[email protected]"
        driver = "org.apache.hive.jdbc.HiveDriver"
        connection_check_timeout_sec = 100
        query = "SELECT sku, sku_name, com_name FROM px2 WHERE sku_name = 'Product A'"
        useKerberos = true
        kerberos_keytab_path = "/home/extimp/lxy_work/datax_format/addax/kerberos/hive.keytab"
        kerberos_principal = "hive/[email protected]"
        krb5_path = "/etc/krb5.conf"
    }
}

sink {
    Console {}
}
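
The Kerberos settings and the JDBC URL above can be sanity-checked outside SeaTunnel with the same keytab, principal, and connection string (a minimal sketch using standard kinit/beeline):

kinit -kt /home/extimp/lxy_work/datax_format/addax/kerberos/hive.keytab hive/[email protected]
beeline -u "jdbc:hive2://cdhtest05:10000/default;principal=hive/[email protected]" -e "SELECT 1"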

Running Command

./bin/start-seatunnel-flink-13-connector-v2.sh --config ./jobs/hive_console.conf
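
Since pipeline.jars (see the log below) picks up every connector jar under lib/, a possible workaround sketch, assuming this job only needs the Jdbc source and the Console sink, is to move the other connector jars out of lib/ before submitting:

# keep only the connectors this job uses; move the rest aside so they are not shipped to YARN
cd /home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8
mkdir -p lib-unused
for jar in lib/connector-*.jar; do
  case "$(basename "$jar")" in
    connector-jdbc-2.3.8.jar|connector-console-2.3.8.jar) ;;  # needed by this job
    *) mv "$jar" lib-unused/ ;;
  esac
done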

Error Exception

2024-11-13 10:09:51,448 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: pipeline.jars, file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/starter/seatunnel-flink-13-starter.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/seatunnel-hadoop3-3.1.4-uber.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-activemq-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-amazondynamodb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-amazonsqs-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-assert-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cassandra-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-mongodb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-mysql-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-opengauss-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-oracle-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-postgres-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-sqlserver-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-cdc-tidb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-clickhouse-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-console-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-datahub-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-dingtalk-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-doris-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-druid-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-easysearch-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-elasticsearch-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-email-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-fake-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-ftp-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-hadoop-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-jindo-oss-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-local-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-obs-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-oss-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-s3-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-file-sftp-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-google-firestore-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-google-sheets-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-hbase-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-hive-2.3.8.jar;file:/home/extimp
/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-base-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-feishu-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-github-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-gitlab-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-jira-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-klaviyo-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-lemlist-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-myhours-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-notion-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-onesignal-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-http-wechat-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-hudi-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-iceberg-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-influxdb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-iotdb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-jdbc-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-kafka-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-kudu-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-maxcompute-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-milvus-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-mongodb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-neo4j-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-openmldb-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-paimon-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-pulsar-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-qdrant-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-rabbitmq-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-redis-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-rocketmq-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-s3-redshift-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-selectdb-cloud-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-sentry-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-slack-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-sls-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-socket-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-starrocks-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-tablestore-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-tdengine-2.3.8.jar;file:/home/extimp/lxy_work/seatunne
l/apache-seatunnel-2.3.8/lib/connector-typesense-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/connector-web3j-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/seatunnel-transforms-v2-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/mysql-connector-java-8.0.28.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/hive-jdbc-2.1.1-cdh6.3.2.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/libthrift-0.9.3-1.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/hive-service-2.1.1-cdh6.3.2.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/hive-common-2.1.1-cdh6.3.2.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/hive-exec-2.1.1-cdh6.3.2.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib/hive-metastore-2.1.1-cdh6.3.2.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/connectors/connector-jdbc-2.3.8.jar;file:/home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/connectors/connector-console-2.3.8.jar
2024-11-13 10:09:51,449 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: state.backend, filesystem
2024-11-13 10:09:51,449 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: security.kerberos.login.keytab, /home/extimp/hjx_work/flink-1.14.4/conf/hive.keytab
2024-11-13 10:09:51,449 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: $internal.deployment.config-dir, /home/extimp/hjx_work/flink-1.14.4/conf
2024-11-13 10:09:51,449 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: $internal.yarn.log-config-file, /home/extimp/hjx_work/flink-1.14.4/conf/log4j.properties
2024-11-13 10:09:51,449 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: state.checkpoints.dir, hdfs://nameservice1:8020/flink-1.14.4-checkpoints
2024-11-13 10:09:51,458 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Starting RPC endpoint for org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/rpc/jobmanager_1 .
2024-11-13 10:09:51,476 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Initializing job 'SeaTunnel' (5b7233e2bd7c8147c175a55205a090e1).
2024-11-13 10:09:51,498 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager at akka://flink/user/rpc/resourcemanager_2 .
2024-11-13 10:09:51,516 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Starting the resource manager.
2024-11-13 10:09:51,530 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Using restart back off time strategy NoRestartBackoffTimeStrategy for SeaTunnel (5b7233e2bd7c8147c175a55205a090e1).
2024-11-13 10:09:51,559 INFO  org.apache.hadoop.yarn.client.RMProxy                        [] - Connecting to ResourceManager at cdhtest02/10.96.119.182:8030
2024-11-13 10:09:51,603 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Running initialization on master for job SeaTunnel (5b7233e2bd7c8147c175a55205a090e1).
2024-11-13 10:09:51,603 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Successfully ran initialization on master in 0 ms.
2024-11-13 10:09:51,722 INFO  org.apache.flink.runtime.scheduler.adapter.DefaultExecutionTopology [] - Built 2 pipelined regions in 2 ms
2024-11-13 10:09:51,766 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Using application-defined state backend: org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionStateBackend@2e8dabcd
2024-11-13 10:09:51,767 INFO  org.apache.flink.runtime.state.StateBackendLoader            [] - State backend loader loads the state backend as BatchExecutionStateBackend
2024-11-13 10:09:51,770 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Using application defined checkpoint storage: org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionCheckpointStorage@3c02310e
2024-11-13 10:09:51,792 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - No checkpoint found during restore.
2024-11-13 10:09:51,800 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - Recovered 0 containers from previous attempts ([]).
2024-11-13 10:09:51,801 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Recovered 0 workers from previous attempt.
2024-11-13 10:09:51,802 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Using failover strategy org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy@7969b816 for SeaTunnel (5b7233e2bd7c8147c175a55205a090e1).
2024-11-13 10:09:51,821 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Starting execution of job 'SeaTunnel' (5b7233e2bd7c8147c175a55205a090e1) under job master id 00000000000000000000000000000000.
2024-11-13 10:09:51,823 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Starting split enumerator for source Source: Jdbc-Source.
2024-11-13 10:09:51,832 INFO  org.apache.hadoop.conf.Configuration                         [] - resource-types.xml not found
2024-11-13 10:09:51,833 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils           [] - Unable to find 'resource-types.xml'.
2024-11-13 10:09:51,833 INFO  org.apache.seatunnel.connectors.seatunnel.jdbc.source.ChunkSplitter [] - Switch to dynamic chunk splitter
2024-11-13 10:09:51,840 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils           [] - Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
2024-11-13 10:09:51,840 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils           [] - Adding resource type - name = vcores, units = , type = COUNTABLE
2024-11-13 10:09:51,846 INFO  org.apache.flink.runtime.externalresource.ExternalResourceUtils [] - Enabled external resources: []
2024-11-13 10:09:51,851 INFO  org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl [] - Upper bound of the thread pool size is 500
2024-11-13 10:09:51,856 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Starting scheduling with scheduling strategy [org.apache.flink.runtime.scheduler.strategy.PipelinedRegionSchedulingStrategy]
2024-11-13 10:09:51,856 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job SeaTunnel (5b7233e2bd7c8147c175a55205a090e1) switched from state CREATED to RUNNING.
2024-11-13 10:09:51,858 INFO  org.apache.seatunnel.api.event.LoggingEventHandler           [] - log event: EnumeratorOpenEvent(createdTime=1731463791857, jobId=5b7233e2bd7c8147c175a55205a090e1, eventType=LIFECYCLE_ENUMERATOR_OPEN)
2024-11-13 10:09:51,864 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Jdbc-Source -> Sink MultiTableSink-Sink (1/2) (03c1bb62ebee66f3242e2bef00395de1) switched from CREATED to SCHEDULED.
2024-11-13 10:09:51,885 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Jdbc-Source -> Sink MultiTableSink-Sink (2/2) (b778bbc73572f275c5f06b28af4de386) switched from CREATED to SCHEDULED.
2024-11-13 10:09:51,887 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Connecting to ResourceManager akka.tcp://flink@cdhtest06:37619/user/rpc/resourcemanager_*(00000000000000000000000000000000)
2024-11-13 10:09:51,892 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Resolved ResourceManager address, beginning registration
2024-11-13 10:09:51,896 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Registering job manager [email protected]://flink@cdhtest06:37619/user/rpc/jobmanager_1 for job 5b7233e2bd7c8147c175a55205a090e1.
2024-11-13 10:09:51,901 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Registered job manager [email protected]://flink@cdhtest06:37619/user/rpc/jobmanager_1 for job 5b7233e2bd7c8147c175a55205a090e1.
2024-11-13 10:09:51,906 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - JobManager successfully registered at ResourceManager, leader id: 00000000000000000000000000000000.
2024-11-13 10:09:51,908 INFO  org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Received resource requirements from job 5b7233e2bd7c8147c175a55205a090e1: [ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN}, numberOfRequiredSlots=2}]
2024-11-13 10:09:51,915 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Requesting new worker with resource spec WorkerResourceSpec {cpuCores=3.0, taskHeapSize=1.425gb (1530082070 bytes), taskOffHeapSize=0 bytes, networkMemSize=343.040mb (359703515 bytes), managedMemSize=1.340gb (1438814063 bytes), numSlots=3}, current pending count: 1.
2024-11-13 10:09:51,935 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - Requesting new TaskExecutor container with resource TaskExecutorProcessSpec {cpuCores=3.0, frameworkHeapSize=128.000mb (134217728 bytes), frameworkOffHeapSize=128.000mb (134217728 bytes), taskHeapSize=1.425gb (1530082070 bytes), taskOffHeapSize=0 bytes, networkMemSize=343.040mb (359703515 bytes), managedMemorySize=1.340gb (1438814063 bytes), jvmMetaspaceSize=256.000mb (268435456 bytes), jvmOverheadSize=409.600mb (429496736 bytes), numSlots=3}, priority 1.
2024-11-13 10:09:57,368 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - Received 1 containers.
2024-11-13 10:09:57,370 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - Received 1 containers with priority 1, 1 pending container requests.
2024-11-13 10:09:57,375 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - Removing container request Capability[<memory:4096, vCores:3>]Priority[1]AllocationRequestId[0]ExecutionTypeRequest[{Execution Type: GUARANTEED, Enforce Execution Type: false}].
2024-11-13 10:09:57,376 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - Accepted 1 requested containers, returned 0 excess containers, 0 pending container requests of resource <memory:4096, vCores:3>.
2024-11-13 10:09:57,376 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - TaskExecutor container_1690789750532_19239_01_000002(cdhtest01:8041) will be started on cdhtest01 with TaskExecutorProcessSpec {cpuCores=3.0, frameworkHeapSize=128.000mb (134217728 bytes), frameworkOffHeapSize=128.000mb (134217728 bytes), taskHeapSize=1.425gb (1530082070 bytes), taskOffHeapSize=0 bytes, networkMemSize=343.040mb (359703515 bytes), managedMemorySize=1.340gb (1438814063 bytes), jvmMetaspaceSize=256.000mb (268435456 bytes), jvmOverheadSize=409.600mb (429496736 bytes), numSlots=3}.
2024-11-13 10:09:57,380 INFO  org.apache.flink.yarn.YarnResourceManagerDriver              [] - TM:Adding keytab hdfs://nameservice1/user/hive/.flink/application_1690789750532_19239/hive.keytab to the container local resource bucket
2024-11-13 10:09:57,400 WARN  org.apache.hadoop.fs.FileSystem                              [] - Cannot load filesystem: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider com.aliyun.emr.fs.oss.JindoOssFileSystem not found
2024-11-13 10:09:57,430 WARN  org.apache.hadoop.fs.FileSystem                              [] - Cannot load filesystem: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem not found
2024-11-13 10:09:57,438 WARN  org.apache.hadoop.fs.FileSystem                              [] - Cannot load filesystem: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem not found
2024-11-13 10:09:57,439 WARN  org.apache.hadoop.fs.FileSystem                              [] - Cannot load filesystem: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.CosFileSystem not found
2024-11-13 10:09:57,574 WARN  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Failed requesting worker with resource spec WorkerResourceSpec {cpuCores=3.0, taskHeapSize=1.425gb (1530082070 bytes), taskOffHeapSize=0 bytes, networkMemSize=343.040mb (359703515 bytes), managedMemSize=1.340gb (1438814063 bytes), numSlots=3}, current pending count: 0
java.util.concurrent.CompletionException: java.lang.NoSuchMethodError: org.apache.hadoop.fs.FsTracer.get(Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/tracing/Tracer;
	at org.apache.flink.util.concurrent.FutureUtils.lambda$supplyAsync$21(FutureUtils.java:1052) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[?:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]
	at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_111]
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.fs.FsTracer.get(Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/tracing/Tracer;
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:323) ~[connector-milvus-2.3.8.jar:2.3.8]
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308) ~[connector-milvus-2.3.8.jar:2.3.8]
	at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:204) ~[connector-milvus-2.3.8.jar:2.3.8]
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:189) ~[connector-milvus-2.3.8.jar:2.3.8]
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354) ~[connector-file-obs-2.3.8.jar:2.3.8]
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124) ~[connector-file-obs-2.3.8.jar:2.3.8]
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403) ~[connector-file-obs-2.3.8.jar:2.3.8]
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371) ~[connector-file-obs-2.3.8.jar:2.3.8]
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477) ~[connector-file-obs-2.3.8.jar:2.3.8]
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) ~[connector-file-obs-2.3.8.jar:2.3.8]
	at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:442) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
	at org.apache.flink.yarn.YarnResourceManagerDriver.createTaskExecutorLaunchContext(YarnResourceManagerDriver.java:452) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
	at org.apache.flink.yarn.YarnResourceManagerDriver.lambda$startTaskExecutorInContainerAsync$1(YarnResourceManagerDriver.java:383) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
	at org.apache.flink.util.concurrent.FutureUtils.lambda$supplyAsync$21(FutureUtils.java:1050) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
	... 4 more
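
My reading of the trace: DFSClient/DistributedFileSystem are loaded from the Hadoop classes bundled inside connector-milvus-2.3.8.jar (and FileSystem from connector-file-obs-2.3.8.jar), and that HDFS client expects FsTracer.get(Configuration) to return org.apache.hadoop.tracing.Tracer, while the FsTracer actually resolved on the classpath (seatunnel-hadoop3-3.1.4-uber.jar / the CDH 6.3.2 Hadoop jars) presumably still exposes the older htrace-based signature. I have not verified the jar contents; a quick check would be (sketch, jar names taken from the classpath log above):

# compare the FsTracer.get signature provided by each Hadoop copy on the classpath
cd /home/extimp/lxy_work/seatunnel/apache-seatunnel-2.3.8/lib
javap -cp seatunnel-hadoop3-3.1.4-uber.jar org.apache.hadoop.fs.FsTracer | grep ' get('
javap -cp connector-milvus-2.3.8.jar org.apache.hadoop.fs.FsTracer | grep ' get(' || echo "FsTracer not bundled here"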

Zeta or Flink or Spark Version

No response

Java or Scala Version

No response

Screenshots

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

laixueyong added the bug label on Nov 13, 2024