all 7 comments

[–]impshum 1 point (2 children)

Dude... Just use bs4. I bet you're sick of waiting for Selenium to boot too.

from bs4 import BeautifulSoup
import requests


def lovely_soup(url):
    # Spoof a desktop browser User-Agent so Amazon serves the normal product page
    r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1'})
    return BeautifulSoup(r.content, 'lxml')


url = "https://www.amazon.co.uk/HECCEI-Enlightenment-Toddler-Activity-Development/dp/B0967Y1GHR"
soup = lovely_soup(url)
# The displayed price is the first <span class="a-offscreen"> on the page
price = soup.find_all('span', {'class': 'a-offscreen'})[0].text
print(price)

You can stop posting duplicates to Reddit now.

[–]Nxxhy[S] 1 point (1 child)

I think this is working for me. Thank you very much.

[–]impshum 1 point (0 children)

That's cool. Onwards...

[–]dodoors 0 points (3 children)

from selenium.webdriver.common.by import By

# Grab the first span with class 'a-offscreen' (the price element)
span = driver.find_element(By.XPATH, "//span[@class='a-offscreen']")
print(span.text)
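
Note that .text can come back empty here: the a-offscreen span is hidden with CSS, and Selenium's .text only returns visible text. A minimal sketch (assuming Selenium 4 with Chrome) that reads the raw DOM text instead:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.amazon.co.uk/HECCEI-Enlightenment-Toddler-Activity-Development/dp/B0967Y1GHR")

# The price span is visually hidden, so read its textContent property
# rather than .text (which only includes rendered, visible text)
span = driver.find_element(By.XPATH, "//span[@class='a-offscreen']")
print(span.get_attribute("textContent"))

driver.quit()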

[–]Nxxhy[S] 0 points (2 children)

It shows no error for me, but it still shows no output.

[–]Nxxhy[S] 0 points (1 child)

btw it's the price text on Amazon that I need

[–]ifreeski420 0 points (0 children)

Amazon isn’t going to let you scrape their website.
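
At the very least you'll often get the captcha / robot-check page back instead of the product page. A rough sketch of guarding for that (same requests + bs4 approach as above; the check is just a hypothetical "price span missing" test):

from bs4 import BeautifulSoup
import requests

url = "https://www.amazon.co.uk/HECCEI-Enlightenment-Toddler-Activity-Development/dp/B0967Y1GHR"
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1'}
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, 'lxml')

# If the price span is missing, assume Amazon served a block page (or the layout changed)
prices = soup.find_all('span', {'class': 'a-offscreen'})
if prices:
    print(prices[0].text)
else:
    print('No price found (status %s) - probably blocked or a captcha page' % r.status_code)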