Processing big data on distributed system

I was asked to solve this problem in an interview:
Suppose there are 4 million comments each with its own id and timestamp. Design an efficient algorithm that finds the most recent 1000 comments. You have 40 servers, and each server can handle 10 thousand comments at a time.
I was thinking about using MapReduce. How do I implement the map and reduce functions to solve this problem?
1 Answer
As the question specifically asks about efficient algorithms, I suspect the interviewer cares less about frameworks like MapReduce and more about the underlying algorithm you would use. This looks like an application of merge sort. You would divide the workload into chunks of 10K comments each (4 million / 10K = 400 chunks, i.e. 10 per server across the 40 servers), have each node sort its chunks by timestamp, and then merge the sorted runs. Once complete you have all 4 million entries sorted by date, and you can then take the most recent 1000. This algorithm runs in O(n log n).
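
Here is a minimal single-process sketch of that approach in Python. The constants come from the problem statement; the function names (`map_phase`, `reduce_phase`) and the local top-1000 truncation are my own illustration of the idea, not part of the original answer.

```python
import heapq
import itertools
import random
from typing import Iterable, List, Tuple

CHUNK_SIZE = 10_000  # comments one server handles at a time (from the problem)
TOP_K = 1_000        # how many recent comments we ultimately want

Comment = Tuple[int, int]  # (timestamp, comment_id)

def map_phase(chunk: List[Comment]) -> List[Comment]:
    """Sort one 10K chunk by timestamp, newest first.

    In a real cluster each of the 40 servers would run this on its
    assigned chunks (4M / 10K = 400 chunks, i.e. 10 per server).
    Only the 1000 most recent comments of a chunk can survive the
    final merge, so the chunk is truncated to its local top 1000.
    """
    return heapq.nlargest(TOP_K, chunk)  # descending by timestamp

def reduce_phase(sorted_runs: Iterable[List[Comment]]) -> List[Comment]:
    """K-way merge of the descending runs, keeping the global top 1000."""
    merged = heapq.merge(*sorted_runs, reverse=True)  # lazy merge of sorted runs
    return list(itertools.islice(merged, TOP_K))

if __name__ == "__main__":
    # Scaled-down demo: 40,000 synthetic comments instead of 4 million.
    comments = [(random.randrange(10**9), cid) for cid in range(40_000)]
    chunks = (comments[i:i + CHUNK_SIZE] for i in range(0, len(comments), CHUNK_SIZE))
    runs = [map_phase(chunk) for chunk in chunks]  # distributed across servers in practice
    top = reduce_phase(runs)
    print(top[:5])  # the five most recent (timestamp, comment_id) pairs
```

The truncation is the one refinement worth mentioning in an interview: since only 1000 entries can end up in the answer, each chunk only has to ship its local top 1000 to the merge step, so at most 400 × 1000 = 400,000 entries cross the network instead of all 4 million. The dominant cost stays with the per-chunk sorting; the merge itself is cheap.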