Why is the XPath output blank while crawling an Amazon website? Kindly help solve the problem below.

Below is the code I used to crawl an Amazon product page, but the output comes back blank. Please help.

from bs4 import BeautifulSoup
import pandas as pd
from lxml import etree
import requests
import time

HEADERS = ({'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0','Accept-Language': 'en-US, en;q=0.5'})
data = pd.DataFrame([])
    
URL= "https://www.amazon.in/dp/B09NM3WWGY"
webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "lxml")
dom = etree.HTML(str(soup))
    
Price = (dom.xpath("//div[@id='corePrice_desktop']/div/table/tbody/tr[2]/td[2]/span/span/text()"))
#if price!=None:
    #price = (dom.xpath("//div[@id='corePrice_desktop']/div/table/tbody/tr[2]/td[2]/span/span/text()"))
#else:
    #price = "No Data"

print(Price)

The output comes back as an empty list ([]).

1 Answer

Answer by Jaydeb Bhunia:

Remove tbody from the XPath and it works perfectly. The browser's developer tools insert a <tbody> element when they render the table, but the raw HTML that requests downloads does not contain one, so the original XPath matches nothing.
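You can confirm this against the page you already downloaded, reusing the dom object from the question's script (a minimal check, not part of the fix; if Amazon served a captcha page instead of the product page, both counts will be 0):

# Sketch: count table rows with and without the tbody step in the path.
# lxml's HTML parser keeps the markup as served, so no <tbody> is added.
with_tbody = dom.xpath("//div[@id='corePrice_desktop']//tbody//tr")
without_tbody = dom.xpath("//div[@id='corePrice_desktop']//table//tr")
print(len(with_tbody), len(without_tbody))  # expect 0 for the first count

The full fixed script follows: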
from bs4 import BeautifulSoup
import pandas as pd
from lxml import etree
import requests
import time

HEADERS = ({'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0','Accept-Language': 'en-US, en;q=0.5'})
data = pd.DataFrame([])
    
URL= "https://www.amazon.in/dp/B09NM3WWGY"
webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "lxml")
dom = etree.HTML(str(soup))
    
Price = dom.xpath("//div[@id='corePrice_desktop']/div/table/tr[2]/td[2]/span/span/text()")
# dom.xpath() returns a list, never None; it is empty when nothing matches
if Price:
    Price = Price[0].strip()
else:
    Price = "No Data"

print(Price)
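If the positional XPath still comes back empty for some listings, a class-based lookup is less fragile than counting table rows. The a-offscreen class name below is an assumption about Amazon's current price markup and may change, so treat this as an optional fallback sketch rather than part of the accepted fix:

# Fallback sketch: select the price span by class instead of by table position.
# 'a-offscreen' is an assumed Amazon class name, not guaranteed to be stable.
price_nodes = dom.xpath("//div[@id='corePrice_desktop']//span[contains(@class, 'a-offscreen')]/text()")
print(price_nodes[0].strip() if price_nodes else "No Data")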