
Question

Spark was initially started by ____________ at UC Berkeley AMPLab in 2009.

a. Mahek Zaharia

b. Matei Zaharia

c. Doug Cutting

d. Stonebraker

Posted under: Hadoop Frameworks, Hadoop

Answer: (b) Matei Zaharia

Explanation: Apache Spark began as a research project by Matei Zaharia at UC Berkeley's AMPLab in 2009 and was later donated to the Apache Software Foundation.



Similar Questions


Q. ____________ is a component on top of Spark Core.

Q. Spark SQL provides a domain-specific language to manipulate ___________ in Scala, Java, or Python.

Q. ______________ leverages Spark Core's fast scheduling capability to perform streaming analytics.

Q. ____________ is a distributed machine learning framework on top of Spark.

Q. ________ is a distributed graph processing framework on top of Spark.

Q. GraphX provides an API for expressing graph computation that can model the __________ abstraction.

Q. Spark architecture is ___________ times as fast as Hadoop disk-based Apache Mahout and even scales better than Vowpal Wabbit.

Q. Users can easily run Spark on top of Amazon's __________.

Q. Spark runs on top of ___________, a cluster manager system that provides efficient resource isolation across distributed applications.

Q. Which of the following can be used to launch Spark jobs inside MapReduce?

Q. Which of the following languages is not supported by Spark?

Q. Spark is packaged with higher level libraries, including support for _________ queries.

Q. Spark includes a collection of over ________ operators for transforming data and familiar data frame APIs for manipulating semi-structured data.

Q. Spark is engineered from the bottom up for performance, running ___________ faster than Hadoop by exploiting in-memory computing and other optimizations.

Q. Spark powers a stack of high-level tools including Spark SQL, MLlib for _________

Q. Apache Flume 1.3.0 is the fourth release under the auspices of Apache of the so-called ________ codeline.

Q. ___________ was created to allow you to flow data from a source into your Hadoop environment.

Q. A ____________ is an operation on the stream that can transform the stream.

Q. A number of ____________ source adapters give you the granular control to grab a specific file.

Q. ____________ is used when you want the sink to be the input source for another operation.
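For reference, here is a minimal sketch of the Spark SQL DataFrame DSL that several of these questions touch on. It assumes a local PySpark installation; the application name, column names, and sample rows are illustrative only and are not taken from the questions above.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session (the application name is arbitrary).
spark = SparkSession.builder.appName("mcq-sketch").getOrCreate()

# Build a small in-memory DataFrame with illustrative rows.
people = spark.createDataFrame(
    [("Alice", 34), ("Bob", 45), ("Carol", 29)],
    ["name", "age"],
)

# The domain-specific language: column expressions instead of raw SQL strings.
adults = people.filter(F.col("age") >= 30).orderBy(F.col("age").desc())
adults.show()

spark.stop()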