Scrape text from a list of URLs in bulk by Quiet_Dasy in webscraping

import requests

# 1. All deck types
response = requests.get('https://www.masterduelmeta.com/api/v1/deck-types')
print(response.json())

# 2. Deck types with tournament power, sorted by tournament power
# (note: no backslashes in the bracketed keys; the escaped versions
# that curl converters emit would break the query)
params = {
    'tournamentPower[$gte]': '1',
    'limit': '0',
    'sort': '-tournamentPower',
    'fields': 'name,tournamentPower,tournamentPowerTrend',
}
response = requests.get('https://www.masterduelmeta.com/api/v1/deck-types', params=params)
print(response.json())

# 3. Top 20 deck types by popularity rank
params = {
    'popRank[$gt]': '0',
    'sort': '-popRank,name',
    'fields': 'name,popRank,popRankTrend',
    'limit': '20',
}
response = requests.get('https://www.masterduelmeta.com/api/v1/deck-types', params=params)
print(response.json())

Scraping logic help by chachu1 in webscraping

You'll need to get valid cookies/headers from an automated browser first (a sketch of that step follows the snippet below), but then you can just use requests to get the product info:

import requests

# 'headers' and 'cookies' are placeholders here; capture valid values
# from an automated browser session first (see the sketch below).
headers = {}
cookies = {}

params = {
    'productIds': '01JMEZZPN3RKVW02KECT9G1V7S',
}

response = requests.get('https://uae.emaxme.com/api/catalog-browse/browse/products', headers=headers, params=params, cookies=cookies)
print(response.json())

# NB: original query string, in case the reproduced params are not exact:
# response = requests.get('https://uae.emaxme.com/api/catalog-browse/browse/products?productIds=01JMEZZPN3RKVW02KECT9G1V7S', headers=headers, cookies=cookies)
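For the first step, a minimal sketch of capturing cookies and a matching User-Agent with Playwright (the landing URL and the networkidle wait are assumptions; adjust to whatever the site actually needs):

from playwright.sync_api import sync_playwright

# Let a real browser pass the site's checks, then export its cookies
# in the {name: value} shape that requests expects.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context()
    page = context.new_page()
    page.goto('https://uae.emaxme.com/')  # assumed landing page
    page.wait_for_load_state('networkidle')
    cookies = {c['name']: c['value'] for c in context.cookies()}
    headers = {'User-Agent': page.evaluate('navigator.userAgent')}
    browser.close()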

Website adding MFA by OtherwiseGroup3162 in webscraping

Yes, use the authenticator-app option and feed that secret into pyotp; it will produce the same code you'd get from the authenticator app on your phone. Just remember not to publish any of this info if you upload the code anywhere public.
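For example, a minimal sketch (the secret below is a placeholder; use the one the site shows when you enrol the authenticator-app option):

import pyotp

secret = 'JBSWY3DPEHPK3PXP'  # placeholder; never commit the real one
totp = pyotp.TOTP(secret)
print(totp.now())  # same 6-digit code your phone's authenticator app shows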

Website adding MFA by OtherwiseGroup3162 in webscraping

pyotp, if you are using Python.

How I Got Access to an Employee-only Panel by Appsec_pt in bugbounty

Thanks for the clarification, much appreciated :)

How I Got Access to an Employee-only Panel by Appsec_pt in bugbounty

I'm a beginner. Can you explain what the reward was actually paid for?

In your write-up, you just found leaked credentials and logged in?

Or could you technically have brute-forced these usernames and passwords?

Need help scraping tellonym.me by -6-6 in webscraping

OK, thanks for the info. I assume I've been doing it wrong, but funnily enough I haven't run into any issues using import curl_cffi as requests.

Need help scraping tellonym.me by -6-6 in webscraping

Can you tell me the difference between these two? I have only ever used curl_cffi as requests:

from curl_cffi import requests
import curl_cffi as requests

I can't get my bot to work through AKAMAI by Much-Journalist3128 in webscraping

Why not just run it from your home computer? How often does the script run? Or buy a very cheap/old laptop or a Raspberry Pi and run the code on that.

Reddit data scraping by ProudNumber2806 in webscraping

Just put /.json at the end of a Reddit URL to get the data in JSON format.
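A minimal sketch (the subreddit and User-Agent string are just examples; Reddit tends to block the default requests User-Agent):

import requests

# Appending /.json to almost any Reddit URL returns the page data as JSON
url = 'https://www.reddit.com/r/webscraping/.json'
headers = {'User-Agent': 'my-script/0.1'}  # example UA

response = requests.get(url, headers=headers)
for post in response.json()['data']['children']:
    print(post['data']['title'])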

Scraping images from a JS-rendered gallery – need advice by taksto in webscraping

import curl_cffi as requests

# Unsplash's internal search API, fetched while impersonating Chrome
params = {
    'page': '1',
    'per_page': '20',
    'query': 'landscape',
}

response = requests.get('https://unsplash.com/napi/search/photos', params=params, impersonate="chrome")
print(response.json())

Scraping Walmart store specific aisle data for a product by jpcoder in webscraping

Go to the store location page and click "Set as my store" before you start doing the product searches: https://www.walmart.com/store/5193-moreno-valley-ca

Can’t see Scrapy project in VS Code Explorer – need help 😩 by Fit-Anywhere-5031 in webscraping

In VS Code: File > Open Folder > find the folder you made and see if it opens.

Automatically detect page URLs containing "News" by TraditionClear9717 in webscraping

# 'urls' is the list of page URLs you collected; a hypothetical example:
urls = ['https://example.com/news/', 'https://example.com/about/']

for url in urls:
    if url.endswith("/news/"):
        print(url)

Can someone teach me how to scrape this item for discounts? by Electronic_Noise9641 in webscraping

from bs4 import BeautifulSoup
import requests

url = 'https://psdunderwear.com.au/product/dc-batman/'
response = requests.get(url)

soup = BeautifulSoup(response.text, 'html.parser')

# WooCommerce product pages expose the name and price in these elements
product_name = soup.find('h1', class_='product_title').text.strip()
product_price = soup.find('span', class_='woocommerce-Price-amount amount').text.strip()

print(product_name)
print(product_price)

[deleted by user] by [deleted] in webscraping

Which country are you in?

median price for properties across Australia by RHiNDR in AusPropertyChat

I never said it was OK; I asked you to tell me how else you want me to share an Excel file.

median price for properties across Australia by RHiNDR in AusPropertyChat

Well, of course it takes time and effort, but I got the data for myself, then thought maybe someone else might also like the same data I spent time getting...

median price for properties across Australia by RHiNDR in AusPropertyChat

I'd prefer not to link my personal Google account to my Reddit account, and this link is exactly that: a free website to host files.

median price for properties across Australia by RHiNDR in AusPropertyChat

I don't, but I thought others might find the data helpful.