Web Page Scraping with BeautifulSoup


I am new to web scraping and I am trying to get the links of each product on every subpage (1-8) of this web page: https://www.sodimac.cl/sodimac-cl/category/scat359268/Esmaltes-al-agua

I have a loop that goes over each page, but for some reason page 7 only returns 20 products, and page 8 returns no products at all.
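For reference, the paging loop builds one URL per subpage. A minimal sketch of that part (the `currentpage` query parameter is taken from the URLs above; `page_url` is a hypothetical helper name):

```python
# Hypothetical sketch of how the subpage URLs are built:
# the site paginates with a 'currentpage' query parameter.
BASE = "https://www.sodimac.cl/sodimac-cl/category/scat359268/Esmaltes-al-agua"

def page_url(page_number):
    # Append the page number as a query parameter
    return f"{BASE}?currentpage={page_number}"

# URLs for subpages 1 through 8
page_urls = [page_url(n) for n in range(1, 9)]
```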

This is the function that gets all the product URLs on a given page:

import requests
from bs4 import BeautifulSoup

def get_all_product_url(base_url):
    # Fetch the page and parse the HTML
    page = requests.get(base_url)
    soup = BeautifulSoup(page.content, 'html.parser', from_encoding='utf-8')
    url_list = []
    # find_all returns an empty list when nothing matches, so no try/except is needed
    products = soup.find_all('div', {'class': 'jsx-3418419141 product-thumbnail'})
    for product in products:
        url = product.find('a').get('href')
        if 'https://www.sodimac.cl' in url:
            url_list.append(url)
        else:
            url_list.append('https://www.sodimac.cl' + url)
    # Return all web addresses without duplicates
    return list(set(url_list))
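As an aside, the absolute/relative prefix check in the function can also be written with `urllib.parse.urljoin`, which handles both cases uniformly. A small self-contained sketch (the example paths are made up for illustration):

```python
from urllib.parse import urljoin

BASE = "https://www.sodimac.cl"

# urljoin resolves a relative href against the base URL,
# and leaves an already-absolute URL unchanged.
relative = urljoin(BASE, "/product/123")
absolute = urljoin(BASE, "https://www.sodimac.cl/p/456")

print(relative)  # https://www.sodimac.cl/product/123
print(absolute)  # https://www.sodimac.cl/p/456
```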

When I run it for page 8, I get an empty list (the function fetches the page itself, so only the call is needed):

base_url = "https://www.sodimac.cl/sodimac-cl/category/scat359268/Esmaltes-al-agua?currentpage=8"
url_list = get_all_product_url(base_url)
url_list

If you run it for page 1, you get 28 entries:

base_url = "https://www.sodimac.cl/sodimac-cl/category/scat359268/Esmaltes-al-agua?currentpage=1"
url_list = get_all_product_url(base_url)
url_list

Any help is really appreciated.

Thanks
