What is the main difference between bucketing and indexing of a table in Hive?
Hive(Bigdata)- difference between bucketing and indexing
The main difference is the goal:
Indexing: the goal of an index is to speed up queries that filter on particular columns, so Hive does not have to scan the whole table to find the matching rows. Indexes become even more valuable as tables grow extremely large, and as you no doubt know, Hive is built for large tables.
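As a rough sketch of the index side (the table, column, and index names are made up for illustration; COMPACT is one of Hive's built-in index handlers, available in versions of Hive that still support indexing):

    -- Build a compact index on the column you usually filter on.
    CREATE INDEX idx_orders_customer
      ON TABLE orders (customer_id)
      AS 'COMPACT'
      WITH DEFERRED REBUILD;

    -- DEFERRED REBUILD leaves the index empty until you rebuild it explicitly.
    ALTER INDEX idx_orders_customer ON orders REBUILD;

Depending on your Hive version and settings, queries that filter on customer_id can then use the index data instead of scanning every row.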
Bucketing: bucketing is usually used for join operations, because you can optimize joins by bucketing records on a specific 'key' or 'id' column. That way, when two tables are bucketed on the same key, records with the same key land in corresponding buckets and the join can be performed bucket by bucket, which is faster. You can think of it as a technique for decomposing a data set into more manageable parts; bucketing also shows up regularly in lists of tips for writing efficient Hive queries.
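And a minimal sketch of the bucketing side (the table names, columns, and the bucket count of 32 are just example choices):

    -- Declare the table as bucketed on the join key.
    CREATE TABLE orders_bucketed (
      order_id    BIGINT,
      customer_id BIGINT,
      amount      DOUBLE
    )
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC;

    -- On older Hive versions this is needed so inserts actually honor the bucket definition.
    SET hive.enforce.bucketing = true;

    INSERT OVERWRITE TABLE orders_bucketed
    SELECT order_id, customer_id, amount FROM orders;

    -- If both sides of a join are bucketed on customer_id, Hive can use a bucket map join.
    SET hive.optimize.bucketmapjoin = true;

If both tables in a join are bucketed on the same column into compatible numbers of buckets, Hive can join them bucket by bucket instead of shuffling the full data sets.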