How do you run Spark locally in IDEA?
A few days ago I tried to run a Spark + Scala program locally from IDEA and hit a problem. At the time I assumed it was an issue with my local Spark installation; today I found out that it was not. Notes below.
I wrote a program with a Maven pom and got the following error:
17/10/12 17:09:43 INFO storage.DiskBlockManager: Created local directory at /private/var/folders/bv/0tp4dw1n5tl9cxpc6dg2jy180000gp/T/blockmgr-0b0bf3cf-dd77-4bb4-97dc-60d6a65a35ae
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.storage.DiskBlockManager.addShutdownHook(DiskBlockManager.scala:147)
    at org.apache.spark.storage.DiskBlockManager.<init>(DiskBlockManager.scala:54)
    at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:78)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:365)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
    at com.didichuxing.scala.BenchMarkMain$.main(BenchMarkMain.scala:21)
    at com.didichuxing.scala.BenchMarkMain.main(BenchMarkMain.scala)
Caused by: java.lang.NoSuchFieldException: SHUTDOWN_HOOK_PRIORITY
    at java.lang.Class.getField(Class.java:1695)
    at org.apache.spark.util.SparkShutdownHookManager.install(ShutdownHookManager.scala:223)
    at org.apache.spark.util.ShutdownHookManager$.shutdownHooks$lzycompute(ShutdownHookManager.scala:50)
    at org.apache.spark.util.ShutdownHookManager$.shutdownHooks(ShutdownHookManager.scala:48)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.<init>(ShutdownHookManager.scala:58)
    at org.apache.spark.util.ShutdownHookManager$.<clinit>(ShutdownHookManager.scala)
    ... 9 more
I had assumed the problem was that Spark was not installed locally. Then a colleague ran a Spark program locally in Eclipse without any trouble, so I started to suspect my own project instead.
This article, https://support.datastax.com/hc/en-us/articles/207038146-DSE-Spark-job-initialisation-returns-java-lang-NoSuchFieldException-SHUTDOWN-HOOK-PRIORITY-, says that the Hadoop jars on the classpath should not be the 2.x ones; the built-in jars should be used instead.
I printed the classpath and renamed the hadoop folder under /Users/yejianfeng/.m2/repository/org/apache.
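For reference, a minimal sketch of dumping the runtime classpath from the driver program (the object name ClasspathDump is an illustrative placeholder of mine, not something from the original project):

import java.io.File

// Minimal sketch: print each classpath entry on its own line so the
// hadoop jars that are actually loaded can be spotted.
object ClasspathDump {
  def main(args: Array[String]): Unit = {
    System.getProperty("java.class.path")
      .split(File.pathSeparator)
      .foreach(println)
  }
}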
Looking at the Spark source, in /Users/yejianfeng/.m2/repository/org/apache/spark/spark-core_2.10/1.6.3/spark-core_2.10-1.6.3-sources.jar!/org/apache/spark/util/ShutdownHookManager.scala there is this code:
Try(Utils.classForName("org.apache.hadoop.util.ShutdownHookManager")) match {
  case Success(shmClass) =>
    val fsPriority = classOf[FileSystem]
      .getField("SHUTDOWN_HOOK_PRIORITY")
      .get(null) // static field, the value is not used
      .asInstanceOf[Int]
    val shm = shmClass.getMethod("get").invoke(null)
    shm.getClass().getMethod("addShutdownHook", classOf[Runnable], classOf[Int])
      .invoke(shm, hookTask, Integer.valueOf(fsPriority + 30))
  case Failure(_) =>
    Runtime.getRuntime.addShutdownHook(new Thread(hookTask, "Spark Shutdown Hook"))
}
The code first obtains FileSystem and then reads its SHUTDOWN_HOOK_PRIORITY field, but that field does not exist in the FileSystem class currently being loaded. This looks like a version problem, specifically a version problem with org.apache.hadoop.fs.FileSystem.
I found that the FileSystem version set in my pom was already 2.7.1, and a look at that version's source shows the field is there: public static final int SHUTDOWN_HOOK_PRIORITY = 10;
Using the IDE's hints, I discovered that FileSystem was being provided by two different dependencies.
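A quick runtime check can also confirm which jar FileSystem is actually loaded from and whether that class has the field. This is a sketch I am adding here, not from the original post; the object name FsOrigin is a placeholder:

import org.apache.hadoop.fs.FileSystem

// Sketch: show which jar supplied the FileSystem class and whether it
// carries the SHUTDOWN_HOOK_PRIORITY field that Spark's shutdown hook needs.
object FsOrigin {
  def main(args: Array[String]): Unit = {
    val source = classOf[FileSystem].getProtectionDomain.getCodeSource
    println(s"FileSystem loaded from: ${if (source != null) source.getLocation else "bootstrap classpath"}")
    val hasField = classOf[FileSystem].getFields.exists(_.getName == "SHUTDOWN_HOOK_PRIORITY")
    println(s"SHUTDOWN_HOOK_PRIORITY present: $hasField")
  }
}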
Clearly, hadoop-core only goes up to 1.2.1, so I tried removing hadoop-core from my pom and deleting it from the local Maven repository.
Problem solved.
Spark can now run on my machine and read local files.
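For completeness, a minimal local-mode sketch of the kind of program that now works (written against Spark 1.6.x as used above; the object name LocalFileCount and the file path are placeholders of mine):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: start Spark in local mode and read a local text file.
object LocalFileCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("LocalFileCount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val lines = sc.textFile("/tmp/input.txt")
    println(s"line count: ${lines.count()}")
    sc.stop()
  }
}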
This post is reproduced from 轩脉刃's blog on cnblogs; original link: http://www.cnblogs.com/yjf512/p/7699881.html. Please contact the original author if you wish to repost.