Apache Storm MCQs and Answers With Explanation – Apache Storm is a distributed, fault-tolerant real-time computation system that enables fast and reliable processing of streaming data. It was developed by Nathan Marz and his team at BackType, was open-sourced after Twitter acquired BackType in 2011, and later became a top-level Apache Software Foundation project in 2014. Thanks to its scalable architecture and easy-to-use API, Storm has become a popular choice for a wide range of real-time processing use cases, such as real-time analytics, machine learning, and ETL (extract, transform, load) operations. If you want to become an expert in Apache Storm, check out these Apache Storm multiple choice questions, which provide a detailed explanation for each question and help you build a thorough understanding of specific topics.
Apache Storm MCQs
In this article, we will be exploring the Top 55 Apache Storm MCQs with Answers that will help you assess your knowledge and deepen your understanding of this powerful technology. You can use this Apache Storm Quiz to see what type of Apache Storm questions will be asked in a placement test or an interview. And if you find these Apache Storm Questions and Answers useful, make sure you follow us regularly to receive updates on various technical quizzes.
Apache Storm Multiple Choice Questions
| Particulars | Details |
|---|---|
| Quiz Name | Apache Storm |
| Exam Type | MCQ (Multiple Choice Questions) |
| Category | Technical Quiz |
| Mode of Quiz | Online |
Top 55 Apache Storm MCQs | Practice Online Quiz
1. What is Apache Storm?
a. A distributed real-time computation system
b. A database management system
c. An operating system
d. A data visualization tool
Answer: a. A distributed real-time computation system
Explanation: Apache Storm is a distributed real-time computation system that is used for processing large volumes of data in real-time.
2. What are the components of Apache Storm?
a. Nimbus, Supervisor, and Zookeeper
b. Hadoop, Hive, and Pig
c. Spark, Flink, and Kafka
d. Cassandra, HBase, and MongoDB
Answer: a. Nimbus, Supervisor, and Zookeeper
Explanation: Apache Storm consists of three main components: Nimbus, Supervisor, and Zookeeper. Nimbus is the master daemon of the cluster, each Supervisor runs on a worker node and launches the worker processes that execute tasks, and Zookeeper is used for coordination and synchronization between them.
3. What is a topology in Apache Storm?
a. A graphical representation of data flows and processing logic
b. A cluster of machines running Apache Storm
c. A data stream processed by Apache Storm
d. A database table used by Apache Storm
Answer: a. A graphical representation of data flows and processing logic
Explanation: In Apache Storm, a topology is a graph of spouts and bolts that defines the data flows and processing logic. The spouts and bolts process data in parallel, and a topology runs indefinitely until it is explicitly killed.
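To make this concrete, here is a minimal sketch of wiring a topology with Storm's Java TopologyBuilder API; the component names and the SentenceSpout, SplitBolt, and CountBolt classes are hypothetical placeholders (the first two are sketched under the next two questions):

```java
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

// Build the topology graph: one spout feeding two bolts.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new SentenceSpout(), 1);        // data source
builder.setBolt("split", new SplitBolt(), 2)                  // first processing step
       .shuffleGrouping("sentences");
builder.setBolt("count", new CountBolt(), 2)                  // second processing step
       .fieldsGrouping("split", new Fields("word"));
// builder.createTopology() yields the StormTopology that gets submitted to the cluster.
```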
4. What is a spout in Apache Storm?
a. A component that generates data for processing
b. A component that processes data in parallel
c. A component that stores data in a database
d. A component that visualizes data
Answer: a. A component that generates data for processing
Explanation: In Apache Storm, a spout is a component that generates data for processing. It reads data from a data source, such as a message queue or a database, and emits it to the next component in the topology.
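For illustration, a minimal spout sketch using the Storm 2.x Java API; the hard-coded sentences stand in for a real data source such as a message queue:

```java
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final String[] sentences = {"storm processes streams", "spouts emit tuples"};
    private int index = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector; // handle used to emit tuples into the topology
    }

    @Override
    public void nextTuple() {
        // Called repeatedly by Storm; emit one tuple per call.
        collector.emit(new Values(sentences[index]));
        index = (index + 1) % sentences.length;
        Utils.sleep(100); // avoid busy-spinning when there is no real source
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }
}
```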
5. What is a bolt in Apache Storm?
a. A component that processes data in parallel
b. A component that generates data for processing
c. A component that stores data in a database
d. A component that visualizes data
Answer: a. A component that processes data in parallel
Explanation: In Apache Storm, a bolt is a component that processes data in parallel. It takes input from spouts or other bolts and emits output to downstream bolts or to external systems such as databases.
6. What is a task in Apache Storm?
a. An instance of a spout or a bolt
b. A message sent between spouts and bolts
c. A log file generated by Apache Storm
d. A database table used by Apache Storm
Answer: a. An instance of a spout or a bolt
Explanation: In Apache Storm, a task is an instance of a spout or a bolt that is running on a worker node. Each task is responsible for processing a subset of the input data.
7. What is a worker in Apache Storm?
a. A process that runs one or more tasks
b. A process that runs the Nimbus component
c. A process that runs the Zookeeper component
d. A process that runs the Supervisor component
Answer: a. A process that runs one or more tasks
Explanation: In Apache Storm, a worker is a process that runs one or more tasks. It is responsible for executing the logic defined in the spouts and bolts.
8. What is the role of Nimbus in Apache Storm?
a. It is the master node that manages the cluster
b. It is a worker node that runs tasks
c. It is a component that generates data for processing
d. It is a component that processes data in parallel
Answer: a. It is the master node that manages the cluster
Explanation: Nimbus is the master node in Apache Storm that manages the cluster. It receives the topology from the user and distributes it to the workers in the cluster. It also monitors the health of the workers through heartbeats and reassigns their work if a node fails.
9. What is the role of Supervisor in Apache Storm?
a. It is a worker node that runs tasks
b. It is the master node that manages the cluster
c. It is a component that generates data for processing
d. It is a component that processes data in parallel
Answer: a. It is a worker node that runs tasks
Explanation: The Supervisor is the daemon that runs on each worker node in Apache Storm. It picks up its assignments from Nimbus (coordinated through ZooKeeper) and starts and stops the worker processes on its node accordingly.
10. What is the role of Zookeeper in Apache Storm?
a. It is used for coordination and synchronization
b. It is a worker node that runs tasks
c. It is a component that generates data for processing
d. It is a component that processes data in parallel
Answer: a. It is used for coordination and synchronization
Explanation: Zookeeper is used for coordination and synchronization in Apache Storm. It stores the state of the cluster and provides a distributed coordination service for the components in the cluster.
11. What is the difference between a spout and a bolt in Apache Storm?
a. A spout generates data, while a bolt processes data
b. A spout processes data, while a bolt generates data
c. A spout runs on a master node, while a bolt runs on a worker node
d. A spout runs in parallel, while a bolt runs sequentially
Answer: a. A spout generates data, while a bolt processes data
Explanation: In Apache Storm, a spout generates data from a data source, while a bolt processes the data in parallel. Spouts are the sources of data in a topology, while bolts perform the data processing logic.
12. What is a tuple in Apache Storm?
a. A unit of data processed by Apache Storm
b. A data source used by Apache Storm
c. A cluster of machines running Apache Storm
d. A data visualization tool
Answer: a. A unit of data processed by Apache Storm
Explanation: In Apache Storm, a tuple is a unit of data processed by the spouts and bolts in the topology. It contains one or more fields that represent the data being processed.
13. What is the difference between a tuple and a message in Apache Storm?
a. A tuple is processed by spouts and bolts, while a message is sent between them
b. A tuple is a unit of data, while a message is a notification or a request
c. A tuple contains fields, while a message contains metadata
d. A tuple is generated by spouts, while a message is generated by bolts
Answer: a. A tuple is processed by spouts and bolts, while a message is sent between them
Explanation: In Apache Storm, a tuple is a unit of data that is processed by the spouts and bolts in the topology, while a message is a notification or a request that is sent between the spouts and bolts.
14. What is a stream in Apache Storm?
a. A sequence of tuples processed by the topology
b. A sequence of messages sent between spouts and bolts
c. A cluster of machines running Apache Storm
d. A database table used by Apache Storm
Answer: a. A sequence of tuples processed by the topology
Explanation: In Apache Storm, a stream is a sequence of tuples processed by the topology. It represents the flow of data through the topology.
15. What is the difference between a stream and a spout in Apache Storm?
a. A stream is a sequence of tuples, while a spout is a component that generates the tuples
b. A stream generates tuples, while a spout processes them
c. A stream runs on a master node, while a spout runs on a worker node
d. A stream runs in parallel, while a spout runs sequentially
Answer: a. A stream is a sequence of tuples, while a spout is a component that generates the tuples
Explanation: In Apache Storm, a stream is a sequence of tuples that are processed by the topology, while a spout is a component that generates the tuples for the topology to process.
16. What is the difference between a stream and a bolt in Apache Storm?
a. A stream is a sequence of tuples, while a bolt processes the tuples
b. A stream generates tuples, while a bolt generates messages
c. A stream runs on a master node, while a bolt runs on a worker node
d. A stream runs in parallel, while a bolt runs sequentially
Answer: a. A stream is a sequence of tuples, while a bolt processes the tuples
Explanation: In Apache Storm, a stream is a sequence of tuples that are processed by the topology, while a bolt is a component that processes the tuples in parallel.
17. What is the role of ackers in Apache Storm?
a. They ensure reliable message processing in the topology
b. They monitor the health of the workers in the cluster
c. They distribute the topology to the workers in the cluster
d. They generate data for processing in the topology
Answer: a. They ensure reliable message processing in the topology
Explanation: Ackers in Apache Storm ensure reliable message processing in the topology. They track the tuples that have been processed by the bolts and send acknowledgements to the spouts, ensuring that the tuples are processed successfully.
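A sketch of how a bolt participates in this acking protocol, assuming a BaseRichBolt with manual acking; the field names are hypothetical:

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class UppercaseBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            // Anchoring the emit to `input` adds it to the same tuple tree,
            // so the acker keeps tracking downstream processing.
            collector.emit(input, new Values(input.getStringByField("word").toUpperCase()));
            collector.ack(input);   // mark this tuple as fully processed here
        } catch (Exception e) {
            collector.fail(input);  // ask the spout to replay the tuple tree
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("upper-word"));
    }
}
```

On the spout side, reliability additionally requires emitting each tuple with a message ID (e.g. collector.emit(new Values(line), msgId)) so that the spout's ack() or fail() method is called back when the tuple tree completes or fails.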
18. What is the role of metrics in Apache Storm?
a. They provide statistics about the performance of the topology
b. They generate data for processing in the topology
c. They ensure reliable message processing in the topology
d. They monitor the health of the workers in the cluster
Answer: a. They provide statistics about the performance of the topology
Explanation: Metrics in Apache Storm provide statistics about the performance of the topology. They can be used to monitor the throughput, latency, and other performance metrics of the topology.
19. What is the role of DRPC in Apache Storm?
a. It provides a way to query the topology from external applications
b. It distributes the topology to the workers in the cluster
c. It generates data for processing in the topology
d. It ensures reliable message processing in the topology
Answer: a. It provides a way to query the topology from external applications
Explanation: DRPC in Apache Storm provides a way to query the topology from external applications. It allows external applications to submit requests to the topology and receive responses in real-time.
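A minimal client-side sketch using Storm's DRPCClient; the host name, port, and the "word-count" function are placeholder assumptions, and the function must be served by a running DRPC topology:

```java
import org.apache.storm.Config;
import org.apache.storm.utils.DRPCClient;

public class DrpcQueryExample {
    public static void main(String[] args) throws Exception {
        Config conf = new Config();
        // "drpc-host" and port 3772 (Storm's default DRPC port) are placeholders.
        DRPCClient client = new DRPCClient(conf, "drpc-host", 3772);
        // Blocks until the DRPC topology returns a result for this request.
        String result = client.execute("word-count", "the quick brown fox");
        System.out.println("DRPC result: " + result);
        client.close();
    }
}
```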
20. What is the role of Trident in Apache Storm?
a. It provides a high-level API for building topologies
b. It generates data for processing in the topology
c. It ensures reliable message processing in the topology
d. It monitors the health of the workers in the cluster
Answer: a. It provides a high-level API for building topologies
Explanation: Trident in Apache Storm provides a high-level API for building complex topologies. It simplifies the process of building topologies by providing operators and functions that can be easily combined to create complex processing logic.
21. What is the difference between a local mode and a distributed mode in Apache Storm?
a. In local mode, the topology runs on a single machine, while in distributed mode, it runs on a cluster of machines
b. In local mode, the topology runs on a cluster of machines, while in distributed mode, it runs on a single machine
c. In local mode, the topology runs in parallel, while in distributed mode, it runs sequentially
d. In local mode, the topology runs on the master node, while in distributed mode, it runs on the worker nodes
Answer: a. In local mode, the topology runs on a single machine, while in distributed mode, it runs on a cluster of machines
Explanation: In local mode, the topology runs on a single machine, which is typically the developer’s machine. This mode is useful for testing and debugging the topology. In distributed mode, the topology runs on a cluster of machines, which allows it to process large volumes of data in parallel.
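A sketch of the two submission paths, assuming Storm 2.x where LocalCluster is AutoCloseable (in 1.x you would call cluster.shutdown() instead):

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... setSpout/setBolt calls omitted ...
        Config conf = new Config();

        if (args.length > 0 && "cluster".equals(args[0])) {
            // Distributed mode: ship the topology to Nimbus on a real cluster.
            conf.setNumWorkers(4);
            StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
        } else {
            // Local mode: an in-process simulated cluster, handy for testing.
            try (LocalCluster cluster = new LocalCluster()) {
                cluster.submitTopology("my-topology", conf, builder.createTopology());
                Thread.sleep(30_000); // let it run for a while, then shut down
            }
        }
    }
}
```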
22. What is the role of the Nimbus node in Apache Storm?
a. It manages the topology and distributes it to the worker nodes
b. It processes the data in the topology
c. It generates data for processing in the topology
d. It monitors the health of the worker nodes in the cluster
Answer: a. It manages the topology and distributes it to the worker nodes
Explanation: The Nimbus node in Apache Storm is the master node in the cluster. It manages the topology, distributes it to the worker nodes, and monitors the health of the workers.
23. What is the role of the ZooKeeper node in Apache Storm?
a. It provides distributed coordination for the cluster
b. It processes the data in the topology
c. It generates data for processing in the topology
d. It monitors the health of the worker nodes in the cluster
Answer: a. It provides distributed coordination for the cluster
Explanation: The ZooKeeper node in Apache Storm provides distributed coordination for the cluster. It is used to store the configuration information for the cluster and to coordinate the communication between the nodes.
24. What is the role of the supervisor nodes in Apache Storm?
a. They run the worker processes that execute the topology
b. They monitor the health of the worker nodes in the cluster
c. They distribute the topology to the worker nodes
d. They generate data for processing in the topology
Answer: a. They run the worker processes that execute the topology
Explanation: The supervisor nodes in Apache Storm run the worker processes that execute the topology. They manage the resources on their nodes and ensure that the worker processes are running correctly.
25. What is the role of the worker nodes in Apache Storm?
a. They execute the topology
b. They manage the topology
c. They generate data for processing in the topology
d. They monitor the health of the other nodes in the cluster
Answer: a. They execute the topology
Explanation: The worker nodes in Apache Storm are responsible for executing the topology. They receive the topology from the Nimbus node and run the worker processes that process the data.
26. What is the difference between a bolt and a spout in Apache Storm?
a. A bolt processes the data, while a spout generates the data
b. A bolt generates the data, while a spout processes the data
c. A bolt runs in parallel, while a spout runs sequentially
d. A bolt runs on the master node, while a spout runs on the worker nodes
Answer: a. A bolt processes the data, while a spout generates the data
Explanation: In Apache Storm, a spout is responsible for generating data and emitting it to the topology, while a bolt is responsible for processing the data that is emitted by the spout or by other bolts in the topology.
27. What is the role of the tuple in Apache Storm?
a. It represents a piece of data that is being processed by the topology
b. It represents a worker process in the cluster
c. It represents a node in the cluster
d. It represents a supervisor node in the cluster
Answer: a. It represents a piece of data that is being processed by the topology
Explanation: In Apache Storm, a tuple represents a piece of data that is being processed by the topology. Tuples are emitted by the spouts and are passed through the bolts for processing.
28. What is the default parallelism hint for a spout in Apache Storm?
a. 1
b. 2
c. 5
d. 10
Answer: a. 1
Explanation: The default parallelism hint for a spout in Apache Storm is 1. This means that by default, only one instance of the spout will be created and run in the cluster.
29. What is the maximum parallelism hint for a bolt in Apache Storm?
a. 1
b. 2
c. 5
d. Unlimited
Answer: d. Unlimited
Explanation: The maximum parallelism hint for a bolt in Apache Storm is unlimited. This means that the number of instances of the bolt that are created and run in the cluster can be increased to handle larger volumes of data.
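For illustration, parallelism hints and task counts are set when the topology is built; the component classes here are hypothetical:

```java
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

TopologyBuilder builder = new TopologyBuilder();
// Parallelism hint = the number of executors (threads) to start with.
builder.setSpout("sentences", new SentenceSpout(), 2);   // 2 executors
builder.setBolt("counter", new WordCountBolt(), 4)       // 4 executors...
       .setNumTasks(8)                                   // ...spread over 8 tasks, so the
       .fieldsGrouping("sentences", new Fields("word")); // bolt can rebalance up to 8
```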
30. What is the role of the acker in Apache Storm?
a. It ensures that tuples are processed in the correct order
b. It tracks the progress of tuples through the topology
c. It monitors the health of the worker nodes in the cluster
d. It generates data for processing in the topology
Answer: b. It tracks the progress of tuples through the topology
Explanation: In Apache Storm, the acker is responsible for tracking the progress of tuples through the topology. It ensures that tuples are processed correctly and that no data is lost.
31. What is the purpose of a grouping in Apache Storm?
a. It determines how tuples are distributed among the worker nodes
b. It determines the order in which tuples are processed by the topology
c. It determines the parallelism hint for the spouts and bolts
d. It determines the number of worker nodes in the cluster
Answer: a. It determines how tuples are distributed among the worker nodes
Explanation: In Apache Storm, a grouping is used to determine how tuples are distributed among the worker nodes in the cluster. Groupings specify the relationships between the spouts and bolts in the topology and determine how the tuples are routed between them.
32. Which of the following is a valid grouping type in Apache Storm?
a. Global grouping
b. Shuffle grouping
c. All grouping
d. All of the above
Answer: d. All of the above
Explanation: Global, shuffle, and all groupings are all valid grouping types in Apache Storm. Shuffle grouping distributes tuples randomly and evenly among the tasks, global grouping sends the entire stream to a single task, and all grouping replicates every tuple to all tasks of the bolt.
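A sketch showing how these groupings are declared on a TopologyBuilder; the component classes are hypothetical:

```java
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new SentenceSpout(), 2);

builder.setBolt("split", new SplitBolt(), 4)
       .shuffleGrouping("sentences");                  // random, even distribution

builder.setBolt("count", new WordCountBolt(), 4)
       .fieldsGrouping("split", new Fields("word"));   // same word -> same task

builder.setBolt("report", new ReportBolt(), 1)
       .globalGrouping("count");                       // whole stream to one task

builder.setBolt("audit", new AuditBolt(), 3)
       .allGrouping("sentences");                      // every task gets a copy
```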
33. What is the purpose of a task in Apache Storm?
a. It represents a worker node in the cluster
b. It represents a spout or bolt instance in the topology
c. It represents a supervisor node in the cluster
d. It represents a Nimbus node in the cluster
Answer: b. It represents a spout or bolt instance in the topology
Explanation: In Apache Storm, a task represents a spout or bolt instance in the topology. Each task runs on a worker node in the cluster and is responsible for processing a subset of the data that is being processed by the topology.
34. Which of the following is a valid task parallelism hint in Apache Storm?
a. 1
b. 2
c. 5
d. Unlimited
Answer: d. Unlimited
Explanation: The task parallelism hint in Apache Storm is unlimited. This means that the number of instances of a spout or bolt that are created and run in the cluster can be increased to handle larger volumes of data.
35. What is a tick tuple in Apache Storm?
a. It represents a piece of data that is being processed by the topology
b. It represents a worker process in the cluster
c. It represents a node in the cluster
d. It is used to trigger periodic tasks in the topology
Answer: d. It is used to trigger periodic tasks in the topology
Explanation: In Apache Storm, a tick tuple is a special type of tuple that is used to trigger periodic tasks in the topology. Tick tuples are emitted by the system at regular intervals and are used to perform actions such as flushing buffers or emitting metrics.
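A sketch of a bolt that requests and handles tick tuples, assuming the TupleUtils helper available in Storm 1.x and later; the buffering methods are placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.utils.TupleUtils;

public class FlushingBolt extends BaseBasicBolt {
    @Override
    public Map<String, Object> getComponentConfiguration() {
        Map<String, Object> conf = new HashMap<>();
        // Ask Storm to deliver a tick tuple to this bolt every 10 seconds.
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
        return conf;
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        if (TupleUtils.isTick(input)) {
            flushBuffer();       // periodic work triggered by the tick
        } else {
            bufferTuple(input);  // normal tuple processing
        }
    }

    private void flushBuffer() { /* e.g. write buffered records to storage */ }
    private void bufferTuple(Tuple t) { /* e.g. append to an in-memory batch */ }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no downstream output in this sketch
    }
}
```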
36. What is the purpose of the Storm UI in Apache Storm?
a. It provides a web-based interface for monitoring and managing topologies
b. It is used to execute commands on the worker nodes in the cluster
c. It is used to manage the ZooKeeper cluster that is used by Storm
d. It is used to manage the Nimbus nodes in the cluster
Answer: a. It provides a web-based interface for monitoring and managing topologies
Explanation: The Storm UI in Apache Storm provides a web-based interface for monitoring and managing topologies. It allows users to view real-time metrics, logs, and status information for the topologies that are running in the cluster.
37. Which of the following is a valid metric in Apache Storm?
a. Total emitted tuples
b. Average temperature
c. Total number of cores in the cluster
d. Total number of files in the HDFS
Answer: a. Total emitted tuples
Explanation: Total emitted tuples is a valid metric in Apache Storm. It represents the total number of tuples that have been emitted by the spouts in the topology.
38. What is the purpose of the Storm Multilang protocol?
a. It is used to communicate between the Nimbus and worker nodes in the cluster
b. It is used to communicate between the supervisor and worker nodes in the cluster
c. It is used to communicate between the spouts and bolts in the topology
d. It is used to communicate between the Storm cluster and external components
Answer: c. It is used to communicate between the spouts and bolts in the topology
Explanation: The Storm Multilang protocol is used to communicate between Storm and spout or bolt implementations written in other languages. Components can be written in any language that can read from and write to STDIN and STDOUT; Storm exchanges JSON messages with these component processes over the Multilang protocol and integrates them into the topology.
39. What is the purpose of the Storm shell module?
a. It allows developers to run Storm topologies from the command line
b. It allows developers to create and manage Storm topologies using a CLI
c. It allows developers to test Storm topologies locally without deploying to a cluster
d. It allows developers to run external processes from within a Storm topology
Answer: b. It allows developers to create and manage Storm topologies using a CLI
Explanation: The Storm shell module allows developers to create and manage Storm topologies using a command-line interface (CLI). It provides a set of commands for creating, deploying, and managing topologies in the Storm cluster.
40. What is the purpose of the Trident API in Apache Storm?
a. It provides a high-level API for building topologies using abstractions such as spouts and bolts
b. It provides a set of tools for testing and debugging Storm topologies
c. It provides a distributed stream processing framework that is built on top of Storm
d. It provides a set of abstractions for working with batches of data in Storm
Answer: d. It provides a set of abstractions for working with batches of data in Storm
Explanation: The Trident API in Apache Storm provides a set of abstractions for working with batches of data in Storm. It allows developers to perform operations such as grouping, filtering, and aggregating data in a more structured way than is possible with the standard Storm API.
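As a sketch, here is a version of the canonical Trident word count; FixedBatchSpout, Split, and MemoryMapState are test utilities shipped with Storm's Trident module:

```java
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.testing.FixedBatchSpout;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.trident.testing.Split;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// A small in-memory batch source for demonstration.
FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
        new Values("the cow jumped over the moon"),
        new Values("four score and seven years ago"));
spout.setCycle(true);

TridentTopology topology = new TridentTopology();
topology.newStream("sentences", spout)                       // batched source stream
        .each(new Fields("sentence"), new Split(), new Fields("word"))
        .groupBy(new Fields("word"))                         // partition batches by word
        .persistentAggregate(new MemoryMapState.Factory(),   // stateful word count
                             new Count(), new Fields("count"));
```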
41. What is the purpose of the Storm-HBase integration?
a. It allows Storm topologies to read and write data from HBase tables
b. It allows Storm topologies to store intermediate data in HBase
c. It allows Storm topologies to replicate data across multiple HBase clusters
d. It allows Storm topologies to manage the HBase cluster used by Storm
Answer: a. It allows Storm topologies to read and write data from HBase tables
Explanation: The Storm-HBase integration allows Storm topologies to read and write data from HBase tables. It provides a set of spouts and bolts that can be used to integrate HBase data into a Storm topology.
42. What is the purpose of the Storm-Kafka integration?
a. It allows Storm topologies to read and write data from Kafka topics
b. It allows Storm topologies to store intermediate data in Kafka
c. It allows Storm topologies to replicate data across multiple Kafka clusters
d. It allows Storm topologies to manage the Kafka cluster used by Storm
Answer: a. It allows Storm topologies to read and write data from Kafka topics
Explanation: The Storm-Kafka integration allows Storm topologies to read and write data from Kafka topics. It provides a set of spouts and bolts that can be used to integrate Kafka data into a Storm topology.
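A minimal sketch using the storm-kafka-client module (Storm 1.2+); the broker addresses, topic name, and consumer group id are placeholders:

```java
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

TopologyBuilder builder = new TopologyBuilder();
// Consume the "events" topic from the given bootstrap servers.
KafkaSpoutConfig<String, String> spoutConfig =
        KafkaSpoutConfig.builder("kafka1:9092,kafka2:9092", "events")
                        .setProp("group.id", "storm-consumer")
                        .build();
builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 2);
```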
43. Which of the following is a valid Storm-Kafka spout configuration option?
a. zookeeper.connect
b. kafka.topic
c. spout.batch.size
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (zookeeper.connect, kafka.topic, and spout.batch.size) are valid configuration options for a Storm-Kafka spout. They are used to specify the connection details for the Kafka cluster, the name of the topic to read from, and the size of the batches read from the topic.
44. What is the purpose of the Storm-HDFS integration?
a. It allows Storm topologies to read and write data from HDFS
b. It allows Storm topologies to store intermediate data in HDFS
c. It allows Storm topologies to replicate data across multiple HDFS clusters
d. It allows Storm topologies to manage the HDFS cluster used by Storm
Answer: a. It allows Storm topologies to read and write data from HDFS
Explanation: The Storm-HDFS integration allows Storm topologies to read and write data from HDFS. It provides a set of spouts and bolts that can be used to integrate HDFS data into a Storm topology.
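A sketch of configuring an HdfsBolt from the storm-hdfs module; the NameNode URL and output path are placeholders:

```java
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;

// Writes pipe-delimited tuple fields to HDFS, rotating files by size.
HdfsBolt hdfsBolt = new HdfsBolt()
        .withFsUrl("hdfs://namenode:8020")
        .withFileNameFormat(new DefaultFileNameFormat().withPath("/storm/output/"))
        .withRecordFormat(new DelimitedRecordFormat().withFieldDelimiter("|"))
        .withRotationPolicy(new FileSizeRotationPolicy(128.0f, Units.MB)) // new file per 128 MB
        .withSyncPolicy(new CountSyncPolicy(1000));                       // sync every 1000 tuples
```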
45. Which of the following is a valid configuration option for a Storm-HDFS bolt?
a. hdfs.path
b. hdfs.output.format
c. bolt.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (hdfs.path, hdfs.output.format, and bolt.parallelism) are valid configuration options for a Storm-HDFS bolt. They are used to specify the output path in HDFS, the output format for the data, and the number of tasks to use for the bolt.
46. What is the purpose of the Storm-Cassandra integration?
a. It allows Storm topologies to read and write data from Cassandra tables
b. It allows Storm topologies to store intermediate data in Cassandra
c. It allows Storm topologies to replicate data across multiple Cassandra clusters
d. It allows Storm topologies to manage the Cassandra cluster used by Storm
Answer: a. It allows Storm topologies to read and write data from Cassandra tables
Explanation: The Storm-Cassandra integration allows Storm topologies to read and write data from Cassandra tables. It provides a set of spouts and bolts that can be used to integrate Cassandra data into a Storm topology.
47. Which of the following is a valid configuration option for a Storm-Cassandra bolt?
a. cassandra.hosts
b. cassandra.keyspace
c. bolt.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (cassandra.hosts, cassandra.keyspace, and bolt.parallelism) are valid configuration options for a Storm-Cassandra bolt. They are used to specify the hosts for the Cassandra cluster, the keyspace to use, and the number of tasks to use for the bolt.
48. What is the purpose of the Storm-Solr integration?
a. It allows Storm topologies to read and write data from Solr indexes
b. It allows Storm topologies to store intermediate data in Solr
c. It allows Storm topologies to replicate data across multiple Solr clusters
d. It allows Storm topologies to manage the Solr cluster used by Storm
Answer: a. It allows Storm topologies to read and write data from Solr indexes
Explanation: The Storm-Solr integration allows Storm topologies to read and write data from Solr indexes. It provides a set of spouts and bolts that can be used to integrate Solr data into a Storm topology.
49. Which of the following is a valid configuration option for a Storm-Solr bolt?
a. solr.collection
b. solr.hosts
c. bolt.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (solr.collection, solr.hosts, and bolt.parallelism) are valid configuration options for a Storm-Solr bolt. They are used to specify the Solr collection to use, the hosts for the Solr cluster, and the number of tasks to use for the bolt.
50. Which of the following is a valid configuration option for a Storm-Kafka spout?
a. kafka.topic
b. kafka.bootstrap.servers
c. spout.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (kafka.topic, kafka.bootstrap.servers, and spout.parallelism) are valid configuration options for a Storm-Kafka spout. They are used to specify the Kafka topic to consume, the bootstrap servers for the Kafka cluster, and the number of tasks to use for the spout.
51. What is the purpose of the Storm-RabbitMQ integration?
a. It allows Storm topologies to read and write data from RabbitMQ queues
b. It allows Storm topologies to store intermediate data in RabbitMQ
c. It allows Storm topologies to replicate data across multiple RabbitMQ clusters
d. It allows Storm topologies to manage the RabbitMQ cluster used by Storm
Answer: a. It allows Storm topologies to read and write data from RabbitMQ queues
Explanation: The Storm-RabbitMQ integration allows Storm topologies to read and write data from RabbitMQ queues. It provides a set of spouts and bolts that can be used to integrate RabbitMQ data into a Storm topology.
52. Which of the following is a valid configuration option for a Storm-RabbitMQ spout?
a. rabbitmq.queue
b. rabbitmq.host
c. spout.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (rabbitmq.queue, rabbitmq.host, and spout.parallelism) are valid configuration options for a Storm-RabbitMQ spout. They are used to specify the RabbitMQ queue to consume, the host for the RabbitMQ cluster, and the number of tasks to use for the spout.
53. What is the purpose of the Storm-JDBC integration?
a. It allows Storm topologies to read and write data from JDBC-compliant databases
b. It allows Storm topologies to store intermediate data in a JDBC-compliant database
c. It allows Storm topologies to replicate data across multiple JDBC-compliant databases
d. It allows Storm topologies to manage the JDBC connection pool used by Storm
Answer: a. It allows Storm topologies to read and write data from JDBC-compliant databases
Explanation: The Storm-JDBC integration allows Storm topologies to read and write data from JDBC-compliant databases. It provides a set of spouts and bolts that can be used to integrate database data into a Storm topology.
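A sketch of configuring a JdbcInsertBolt from the storm-jdbc module; the data source class, connection URL, and table name are placeholder assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.storm.jdbc.bolt.JdbcInsertBolt;
import org.apache.storm.jdbc.common.ConnectionProvider;
import org.apache.storm.jdbc.common.HikariCPConnectionProvider;
import org.apache.storm.jdbc.mapper.JdbcMapper;
import org.apache.storm.jdbc.mapper.SimpleJdbcMapper;

Map<String, Object> hikariConfig = new HashMap<>();
hikariConfig.put("dataSourceClassName", "org.postgresql.ds.PGSimpleDataSource"); // placeholder driver
hikariConfig.put("dataSource.url", "jdbc:postgresql://db-host/storm");           // placeholder URL
ConnectionProvider connectionProvider = new HikariCPConnectionProvider(hikariConfig);

// Maps tuple fields to the columns of the target table by name.
JdbcMapper mapper = new SimpleJdbcMapper("user_events", connectionProvider);
JdbcInsertBolt insertBolt = new JdbcInsertBolt(connectionProvider, mapper)
        .withTableName("user_events")
        .withQueryTimeoutSecs(30);
```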
54. Which of the following is a valid configuration option for a Storm-JDBC bolt?
a. jdbc.driver.class
b. jdbc.connection.url
c. bolt.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (jdbc.driver.class, jdbc.connection.url, and bolt.parallelism) are valid configuration options for a Storm-JDBC bolt. They are used to specify the JDBC driver class to use, the connection URL for the database, and the number of tasks to use for the bolt.
55. Which of the following is a valid configuration option for a Storm-HBase bolt?
a. hbase.table
b. hbase.zookeeper.quorum
c. bolt.parallelism
d. All of the above
Answer: d. All of the above
Explanation: All of the options listed (hbase.table, hbase.zookeeper.quorum, and bolt.parallelism) are valid configuration options for a Storm-HBase bolt. They are used to specify the HBase table to use, the ZooKeeper quorum for the HBase cluster, and the number of tasks to use for the bolt.
We hope that candidates who are looking for Apache Storm MCQs with answers have found this article to be worthwhile. Please continue to visit our Freshersnow website frequently to access more technical quizzes.