Spark Notebook for Developers

Feb 08, 2016. Written by Ranjitha, Platform Operations Engineer


Spark Notebook allows performing reproducible analysis with Scala, Apache Spark and more. This is achieved through an interactive web-based editor that can combine Scala code, SQL queries, Markdown or even JavaScript in a collaborative manner.


Spark is available out of the box and is simply accessed through the variable sparkContext. Spark Notebook offers these capabilities to anybody who needs to play with data, leveraging not only Spark for all data manipulation but also the Typesafe Reactive Platform, to offer unique power to the user.


First, download the Spark Notebook distribution (a Debian package and an archive are available).


Unzip the package into a directory of your choice.
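Assuming you downloaded the archive distribution, unpacking looks roughly like this (the archive name is inferred from the directory name used below; the exact file name and format may vary):

```shell
# Unpack the 0.6.0 distribution archive (name assumed from this post's version;
# if you grabbed the Debian package, install it with dpkg -i instead)
tar -xzf spark-notebook-0.6.0-scala-2.10.4-spark-1.4.1-hadoop-1.0.4.tgz
```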


Rename the extracted directory spark-notebook-0.6.0-scala-2.10.4-spark-1.4.1-hadoop-1.0.4 to spark-notebook:

mv spark-notebook-0.6.0-scala-2.10.4-spark-1.4.1-hadoop-1.0.4 spark-notebook

Running Spark-notebook

To run Spark Notebook, cd into the spark-notebook directory and start the server.
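A minimal sketch of starting the server, assuming the directory layout from the unpacked distribution above (the launcher script ships in the distribution's bin directory):

```shell
cd spark-notebook
# Start the notebook server; by default it listens on port 9000
./bin/spark-notebook
```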


When the server has started, fire up your browser and point it to localhost:9000, and you’ll see the notebook list.

From there you can either create a new notebook or launch an existing one.

In the UI, you can see the following menu tabs: Files, Running, Clusters and New. To create a new notebook, click the New option.

Click the Clusters tab to create a Spark cluster, which opens in another window. To run a program, write it in a cell and click the Run button. For example, enter the following in a cell:

In[]: import org.apache.spark._
      import org.apache.spark.SparkContext._
      import org.apache.spark.rdd._

Click the Run button. You will see the output in the cell.
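Building on the imports above, here is a sketch of a next cell you might run. The transformation chain is the classic word count; it is written against a plain Scala collection here so it runs anywhere, but in a notebook you would build the input with sparkContext.textFile(...) and replace groupBy with map plus reduceByKey on the RDD:

```scala
// Word count over a local collection, mirroring the shape of the RDD pipeline.
// The input data here is a made-up example, not from the original post.
val lines = Seq("spark notebook demo", "spark scala demo")

val counts = lines
  .flatMap(_.split("\\s+"))              // split each line into words
  .groupBy(identity)                     // group occurrences of the same word
  .map { case (w, ws) => (w, ws.size) }  // count each group

// counts: Map(spark -> 2, notebook -> 1, demo -> 2, scala -> 1)
```

On an RDD the equivalent counting step would be `.map(w => (w, 1)).reduceByKey(_ + _)`, which avoids shuffling whole groups across the cluster.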


Spark Notebook supports, among other things:

- The construction of models on a full dataset, not just subsets
- The generation of deployable products to Mesos clusters
- The creation of Avro and Play/Akka HTTP powered web services that use the resulting dataset
- The creation of repositories and indexes of the analyses and services

Deploy your Spark-Notebook now
