I recently developed an ML model for a classification problem and would now like to put it into production to classify real production data. While exploring this, I came across two terms, deploying and serving an ML model. What is the basic difference between them?
What is the difference between Deploying and Serving ML model?
Asked by alex3465
Based on my own readings and understanding, here's the difference:
Deploying = wrapping your model in a server/API (e.g. a REST API) so that it can make predictions on new, unlabelled data.
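As a concrete illustration, here is a minimal sketch of that kind of deployment with Flask. The `predict_one` function is a placeholder standing in for your trained classifier's `model.predict(...)` call, and the route name and payload shape are assumptions, not a fixed convention:

```python
# Minimal sketch: deploying a single model behind a Flask REST API.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_one(features):
    # Placeholder for your trained model, e.g. model.predict([features])[0].
    return int(sum(features) > 0)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [0.1, -2.3, 4.5]}.
    payload = request.get_json(force=True)
    return jsonify({"prediction": predict_one(payload["features"])})

# To serve it: app.run(host="0.0.0.0", port=5000)
```

A client would then POST feature vectors to `/predict` and get a JSON prediction back.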
Serving = running a server that is specialized for hosting prediction models. The idea is that one serving system can host multiple models and route different requests to each of them.
Basically, if your use case requires deploying multiple ML models, you might want to look at a serving framework like TorchServe. But if it's just one model, in my experience Flask is already good enough.
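For the multi-model case, the TorchServe workflow looks roughly like the sketch below. The model names, file paths, and handler choice are hypothetical; substitute your own archived models:

```shell
# Sketch: package a model into a .mar archive, then serve several at once.
torch-model-archiver --model-name classifier_a \
    --version 1.0 \
    --serialized-file model_a.pt \
    --handler image_classifier \
    --export-path model_store

torchserve --start --ncs \
    --model-store model_store \
    --models classifier_a=classifier_a.mar classifier_b=classifier_b.mar

# Each model then gets its own prediction endpoint, e.g.:
# curl http://127.0.0.1:8080/predictions/classifier_a -T input.jpg
```

The point is that one running TorchServe instance exposes a separate endpoint per model, which a single hand-rolled Flask app does not give you out of the box.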
References:
- PyTorch tutorial: Deploying with Flask
- TorchServe