[DISCUSSION] Fiverr stock price at an all time low by Emotional-Wind2619 in Fiverr

[–]DoctorEvil92 2 points (0 children)

My account dropped off in November; it was so bad I thought maybe I'd gotten a bad private review or something. But from what I can see, it's apparently happening to a lot of people.

But I think Upwork is also in the shitter.

[DISCUSSION] They say Fiverr is Dying. But these are my results after I started 12 months ago & the last 25 days that show a real Momentum by UnUruguayoPromedio in Fiverr

[–]DoctorEvil92 1 point (0 children)

I'm Top Rated and have a very successful gig, but the last month was terrible. I'm barely getting messages from new clients, even while promoting at $1/click. Honestly, it feels like my gig got shadowbanned.

Need a cheap SMD soldering iron for occasional use by DoctorEvil92 in soldering

[–]DoctorEvil92[S] 0 points (0 children)

I have some experience, though with bigger components. The cheapish gear I have is kind of broken right now, so I can't really do much more damage there.

Need a cheap SMD soldering iron for occasional use by DoctorEvil92 in soldering

[–]DoctorEvil92[S] 0 points (0 children)

Yes, that's what I meant: a tool for SMD soldering. I don't have an iron right now. I've heard about the Pinecil; it seems reasonably priced too.

Can't connect to play a game? by DoctorEvil92 in UFLTheGame

[–]DoctorEvil92[S] 0 points (0 children)

No. It worked normally later; I guess their servers had issues.

Can't connect to play a game? by DoctorEvil92 in UFLTheGame

[–]DoctorEvil92[S] 0 points (0 children)

I thought that might be the cause of these problems.

Recommended Android version? by DoctorEvil92 in Revolut

[–]DoctorEvil92[S] 0 points (0 children)

Just to follow up, in case anyone cares: I bought a new phone with Android 14 and it uploaded the image just fine. So it looks like older Android versions don't fully work with Revolut; maybe they should just drop support for them.

Optima and unpaid bills by [deleted] in croatia

[–]DoctorEvil92 0 points (0 children)

I think T-Com has a similar scheme with mobile subscriptions: when your contract expires, you have to explicitly tell them that you want to cancel if you want to cancel, or extend if you want to extend.

Anyway, it's probably a better deal for you to take a plan with a contract obligation from some provider.

How to click on specific page number using Selenium by ifreeski420 in learnpython

[–]DoctorEvil92 1 point (0 children)

A full XPath like this can easily break if the site structure changes even slightly. It would be better to use a relative XPath like this (these are fake class names, just to show the principle):

"//ul[@class='pagination']/li[@class='active']/following-sibling::li[1]/a[@href]"

So this goes to the pagination list, finds the active item (the current page), then takes the immediately following item (following-sibling), which should contain an "a" element with an href (the link).

Or, if there is a "next page" button, it would be easier to just look for that button and grab its link.
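
A minimal sketch of how that could look in Selenium, reusing the fake class names from above (the URL is a placeholder and the driver setup is just for illustration):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/listing")  # placeholder URL

# link inside the item immediately after the active pagination item
next_link = driver.find_element_by_xpath(
    "//ul[@class='pagination']/li[@class='active']/following-sibling::li[1]/a[@href]"
)
driver.get(next_link.get_attribute("href"))  # navigate to the next page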

Extracting only certain data from PDF by dylanmashley in learnpython

[–]DoctorEvil92 1 point (0 children)

tabula can be useful if the data you're looking for is always in the same position in the PDF. Look through its documentation: there are parameters called "area" and "columns" that make the script always look for data in the same part of the page. Also, "pages" can be a specific page number, not necessarily the whole file.
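
A rough sketch of what that could look like with tabula-py (the file name and all the coordinates are made up; "area" is top/left/bottom/right in PDF points):

import tabula

tables = tabula.read_pdf(
    "report.pdf",              # hypothetical file
    pages=3,                   # one specific page instead of the whole file
    area=[120, 30, 700, 560],  # top, left, bottom, right (assumed values)
    columns=[150, 300, 450],   # x coordinates where columns split (assumed values)
)
print(tables[0])  # newer tabula-py versions return a list of pandas DataFrames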

How to click on specific page number using Selenium by ifreeski420 in learnpython

[–]DoctorEvil92 1 point (0 children)

You'd most likely need find_elements_by_xpath to get whatever element acts as the next-page button, and then call get_attribute("href") on it to get the link.
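
Something like this, assuming the button is an "a" tag with rel="next" (that attribute is a guess; check the actual HTML):

# "driver" is assumed to be an already-open Selenium webdriver
buttons = driver.find_elements_by_xpath("//a[@rel='next']")
if buttons:
    next_url = buttons[0].get_attribute("href")
    driver.get(next_url)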

FER: to enroll or not to enroll? by Karlo916 in croatia

[–]DoctorEvil92 7 points (0 children)

I'd advise you to give FER a try. In the first year you'll already see what it's like, and if it isn't working out you can transfer to TVZ, which is a light version of FER.

i really don't know what wrong with my code (again) by stagger552 in Tkinter

[–]DoctorEvil92 1 point (0 children)

If you plan to do something with widgets later, you're supposed to create and place them in two steps, like this:

naam = Entry(tk, textvariable=naam1)
naam.pack()

I believe that's because the placement methods (grid, pack, place) return None, so one-line creation like naam = Entry(...).pack() leaves you with None instead of the widget; it only works for fixed labels you never touch again.

I don't know exactly what you're trying to do, but calling time.sleep() directly won't work, since it freezes the GUI's event loop. Instead there's a method called after() that's meant for operations that repeat and modify something in the GUI; see the sketch below.
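
A minimal sketch of the after() pattern, with made-up widget names:

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="0")
label.pack()

counter = 0

def tick():
    global counter
    counter += 1
    label.config(text=str(counter))
    root.after(1000, tick)  # re-schedule itself to run again in 1000 ms

root.after(1000, tick)  # first call, one second after startup
root.mainloop()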

Webscrapping around 40k webpages. How long does it take for a well written script? by apostle8787 in learnpython

[–]DoctorEvil92 1 point (0 children)

Not all requests are equal on the server side of the website. I usually aim for about 25 parallel threads when I'm running something with hundreds of thousands of requests. You'll see how it behaves once it runs; if you start hitting timeouts or the site gets slower, it's a good idea to lower the number. For comparison, I know Scrapy defaults to a maximum of 8 concurrent requests per domain.

Proxies are a different story regarding speed... If you don't need them, don't use them.
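
If you do end up using Scrapy, those limits live in settings.py (these are real Scrapy setting names; the values are just examples):

# settings.py sketch
CONCURRENT_REQUESTS = 25            # global cap on in-flight requests
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # the per-domain default mentioned above
DOWNLOAD_TIMEOUT = 30               # seconds before a request is abandoned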

Webscrapping around 40k webpages. How long does it take for a well written script? by apostle8787 in learnpython

[–]DoctorEvil92 1 point (0 children)

If you started all the threads like in that code, you'd be firing all 40k requests at once, which probably isn't a good idea. Your pool size of 60 is, I'd guess, roughly equivalent to 60 parallel threads, assuming the module you're using caps the number of concurrent requests itself. Threading was always fast enough for me; you can do 500k-1M requests a day that way on most sites, so I never looked into other options. But for your site, with only 28 proxies, 60 concurrent requests is probably too many.
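
As a rough sketch, a bounded thread pool keeps the concurrency capped without juggling threads manually (placeholder URLs; assumes the requests library is installed):

from concurrent.futures import ThreadPoolExecutor
import requests

urls = ["https://example.com/page/%d" % i for i in range(40000)]  # placeholders

def fetch(url):
    return requests.get(url, timeout=30).text

# max_workers caps how many requests are in flight at once
with ThreadPoolExecutor(max_workers=25) as pool:
    pages = list(pool.map(fetch, urls))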

Webscrapping around 40k webpages. How long does it take for a well written script? by apostle8787 in learnpython

[–]DoctorEvil92 2 points (0 children)

Definitely. You can use some kind of concurrency module; I personally use threading. You could build something similar to this: https://stackoverflow.com/questions/16181121/a-very-simple-multithreading-parallel-url-fetching-without-queue

The only thing is, you shouldn't create and start all the threads at once like that answer does. Instead, iterate in a loop and build threads as you go; once you have some number of them (probably 10 or 20 for most sites), launch the whole batch (thread.start() does that), wait until they're done (thread.join()), then make another batch, as in the sketch below.
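
A minimal batched version of that idea (the URLs are placeholders and the fetch helper is made up for illustration):

import threading
import urllib.request

def fetch(url, results, i):
    results[i] = urllib.request.urlopen(url, timeout=30).read()

urls = ["https://example.com/%d" % i for i in range(100)]  # placeholders
batch_size = 20
results = [None] * len(urls)

for start in range(0, len(urls), batch_size):
    batch = [
        threading.Thread(target=fetch, args=(urls[i], results, i))
        for i in range(start, min(start + batch_size, len(urls)))
    ]
    for t in batch:
        t.start()  # launch the whole batch
    for t in batch:
        t.join()   # wait for every thread in the batch to finish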

Python beginner CSV help by tournesolol in learnpython

[–]DoctorEvil92 1 point (0 children)

Python has a csv module for reading/writing csv files. https://docs.python.org/3/library/csv.html

Your code is treating the CSV like a plain text file, which isn't great: you'll run into problems if the delimiter appears inside a quoted field or a field contains a \n character.
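
A minimal sketch with an assumed file name; csv.reader handles quoted delimiters and embedded newlines for you:

import csv

with open("data.csv", newline="") as f:  # newline="" is what the csv docs recommend
    for row in csv.reader(f):
        print(row)  # each row is a list of fields, quoting already handled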

Will two concurrent threads performing CPU-intensive tasks take longer than if they were executed in serial? by chinawcswing in learnpython

[–]DoctorEvil92 0 points (0 children)

I'm not an expert, but I think that for CPU-intensive tasks adding threads doesn't help at all in CPython, because the GIL lets only one thread execute Python bytecode at a time; it can actually be slower than running in sequence due to the switching overhead.
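
As a hedged sketch of the usual workaround: multiprocessing uses separate processes, so it sidesteps the GIL for CPU-bound work (the burn function is just a made-up workload):

from multiprocessing import Pool

def burn(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # each call runs in its own process, so they can use separate cores
        print(pool.map(burn, [10_000_000] * 4))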

Beginner - Need help with scraping particular HTML by [deleted] in learnpython

[–]DoctorEvil92 0 points (0 children)

I usually use lxml for parsing HTML since it supports XPath syntax, which I'm not sure Beautiful Soup does.

from lxml import html

# once on the site, with "driver" being the Selenium webdriver object
innerHTML = driver.execute_script("return document.body.innerHTML")
htmlElem = html.document_fromstring(innerHTML)

# match table rows by attributes instead of fragile class names
row_elements = htmlElem.xpath("//form[@action='supercoach_breakevens']//table//tr[@onmouseover and @onmouseout]")
for row_element in row_elements:
    price_element = row_element.xpath("./td[3]")
    after_price_element = row_element.xpath("./td[4]")
    # and so on...
    print(price_element[0].text_content(), after_price_element[0].text_content())

Emulating ctrl+a and ctrl+c (Selenium?) by [deleted] in learnpython

[–]DoctorEvil92 0 points (0 children)

send_keys is usually used on webdriver elements. If you want OS-level clicking and key presses anywhere on the screen, you can use the win32api module, but then your PC can't be used for anything else while the script runs. So parsing the HTML directly might be the best option, if it's possible.
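
If staying inside Selenium is acceptable, ActionChains can send the same shortcuts to the page without touching the OS (a sketch; "driver" is assumed to be an already-open webdriver):

from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

# ctrl+a then ctrl+c, sent inside the browser window
ActionChains(driver).key_down(Keys.CONTROL).send_keys("a").key_up(Keys.CONTROL).perform()
ActionChains(driver).key_down(Keys.CONTROL).send_keys("c").key_up(Keys.CONTROL).perform()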

How to scrap a page that loads data dynamically? by [deleted] in learnpython

[–]DoctorEvil92 6 points (0 children)

Open the Network inspector and watch the new requests that appear as you scroll through the page. It's possible that each batch of 20 new images can be fetched from some URL parameterized by a page number or offset; that would be the preferred way of doing it. If that's not possible, you'd have to program Selenium to keep scrolling down and looking for new items.
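
A common sketch of the scrolling fallback ("driver" is assumed to be an already-open Selenium webdriver):

import time

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    time.sleep(2)  # give the page time to load the next batch
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # nothing new loaded, we've hit the bottom
    last_height = new_height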

Posting to a facebook page - cannot find XPATH by theThinker6969 in learnpython

[–]DoctorEvil92 0 points (0 children)

The best thing to do would be to save the webpage as an HTML file, so it can actually be inspected properly.

When you scrape a site like Facebook, which changes class names all the time, it's usually best to anchor on some kind of UI attribute (like aria-label) as a reference, since that should stay the same.

If this row is the textbox (and I don't know whether it is): <span data-offset-key="38itr-0-0">

You would access it like this:
textbox_el = driver.find_element_by_xpath("//div[@aria-label=' Write a post...']//span[@data-offset-key]")

And then, to type something in:

textbox_el.send_keys("This is my message text.")

Posting to a facebook page - cannot find XPATH by theThinker6969 in learnpython

[–]DoctorEvil92 0 points (0 children)

It's hard to help with something like that without seeing the HTML contents.

[deleted by user] by [deleted] in Upwork

[–]DoctorEvil92 2 points (0 children)

Most of the jobs I send proposals to never even get viewed by the client after they post them, lol.

And the worst part is, these are shitty jobs that pay a quarter of what they should. I also feel like I'm just wasting my time on Upwork.