I want to get and print all the product URLs from one of the AliExpress search pages. My code works, but it only returns the first 12 URLs that are shown, not all the URLs on the page; normally there should be 60 URLs, because every AliExpress page contains 60 products. How can I extract all 60 URLs? Any help is highly appreciated.
This is my code:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from time import sleep
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://www.aliexpress.com/wholesale?trafficChannel=main&d=y&CatId=0&SearchText=smart+lock<ype=wholesale&SortType=total_tranpro_desc")
urls = [a.get_attribute('href') for a in driver.find_elements(By.CSS_SELECTOR,'a[href*="/item/"]')]
for url in urls:
    print(url)
The web page does not render all the products at once. It uses lazy loading, so only the products that are currently visible exist in the DOM.
To grab all the products you will have to scroll the page, collecting the rendered product URLs as you scroll. The URLs should be stored in a set to avoid duplicates.
I tried the following simple solution and it worked for me:
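A minimal sketch of that scroll-and-collect approach, assuming the lazy loader reacts to plain window scrolls; it reuses the a[href*="/item/"] selector from the question, and the scroll step and one-second pause are arbitrary choices that may need tuning:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from time import sleep

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://www.aliexpress.com/wholesale?trafficChannel=main&d=y&CatId=0&SearchText=smart+lock&ltype=wholesale&SortType=total_tranpro_desc")

urls = set()
last_offset = 0
while True:
    # collect the product links that are currently rendered in the DOM
    for a in driver.find_elements(By.CSS_SELECTOR, 'a[href*="/item/"]'):
        urls.add(a.get_attribute('href'))
    # scroll down one viewport and give the page time to lazy-load more items
    driver.execute_script("window.scrollBy(0, window.innerHeight);")
    sleep(1)
    new_offset = driver.execute_script("return window.pageYOffset;")
    if new_offset == last_offset:
        # the scroll position did not change, so we have reached the bottom
        break
    last_offset = new_offset

print(len(urls))
for url in sorted(urls):
    print(url)

driver.quit()
Because the links are added to a set, products that are still rendered on the next pass of the loop do not create duplicates.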
The output is: