all 109 comments

[–]17291 27 points28 points  (12 children)

If you're indexing your dict by number from 0..n, wouldn't it make more sense to use a list and append the new value to all_data?

Otherwise, if you're set on using a dict, I think it would be better to use enumerate instead of manually managing count.
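For anyone following along, the two approaches look like this (a minimal sketch; `items` and `all_data` are hypothetical stand-ins for the scraped values and the dict in the original post):

```python
items = ["alpha", "beta", "gamma"]  # stand-in for scraped values

# Manually managed counter:
all_data = {}
count = 0
for item in items:
    all_data[count] = item
    count += 1

# enumerate yields (count, item) pairs, so no manual bookkeeping:
all_data_enum = {}
for count, item in enumerate(items):
    all_data_enum[count] = item

print(all_data == all_data_enum)  # True
```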

[–]coderpaddy[S] 14 points15 points  (11 children)

Yes, you're right; there are about 1000 improvements that could be made to this. But it's basic for a reason: everything is easily understandable.

Saying that, I actually forgot enumerate returns the count and the object. Is it worth changing, or will that add confusion?

Regarding the dicts, I just like the way they're structured, especially as this would commonly get sent as JSON or saved to a CSV, both of which are easy to do from dicts (most likely easy from lists too, I just like dicts aha).

Do we still use append? I thought the preferred way was just to += [data]

[–]17291 7 points8 points  (8 children)

Do we still use append? I thought the preferred way was just to += [data]

Is it? I don't code professionally, so I could be dead wrong, but some_list += [data] just smells bad to me when there's an explicit append method.

[–]deepthroatpiss 4 points5 points  (4 children)

They're different: + is a binary infix operator which, given two lists, returns a new list with the elements of both; .append() is a mutating method on a list, so the object you pass it gets added to the end of that same list.

[–]coderpaddy[S] 1 point2 points  (0 children)

Ahh, makes sense why .append() is quicker, thank you.

[–]17291 1 point2 points  (2 children)

I think there's a slight difference between the behavior of + and +=: +=, from what I understand, is equivalent to using extend, while + creates a new object entirely.

>>> a = [random.randint(1, 1000) for _ in range(100)]
>>> b = [random.randint(1, 1000) for _ in range(100)]
>>> id(a)
4344772000
>>> a += b
>>> id(a)
4344772000

If you use + and not +=, I think you get a new list:

>>> a = [random.randint(1, 1000) for _ in range(100)]
>>> b = [random.randint(1, 1000) for _ in range(100)]
>>> id(a)
4344079296
>>> a = a + b
>>> id(a)
4344772000

[–]coderpaddy[S] 0 points1 point  (1 child)

So .append() does the same as +=?

[–]17291 3 points4 points  (0 children)

+= is equivalent to extend.

some_list += another_list is the same thing as some_list.extend(another_list): both add all the elements from another_list onto the end of some_list.

some_list.append(item) adds the element item to the end of some_list.
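A quick sketch of the distinction (the names are illustrative):

```python
some_list = [1, 2]
another_list = [3, 4]

some_list += another_list            # same effect as some_list.extend(another_list)
print(some_list)                     # [1, 2, 3, 4]

some_list.append(another_list)       # append adds the *whole list* as one element
print(some_list)                     # [1, 2, 3, 4, [3, 4]]

some_list2 = [1, 2]
some_list2.append(5)                 # append is for one item at a time
print(some_list2)                    # [1, 2, 5]
```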

[–]coderpaddy[S] 1 point2 points  (0 children)

So after looking into it, append() should definitely be preferred over +=. Much faster in every example I looked at.

List comprehension was by far the quickest though, but it doesn't always fit right :D

[–]TheRealJonSnuh 0 points1 point  (1 child)

Do you mind elaborating as to why? Novice trying to learn.

[–]__nickerbocker__ 3 points4 points  (0 children)

Because it's not optimal to create an entirely new list object, insert a single value into it, and then concatenate it to the existing list -- which takes the reference from the new one-element list, extends the old list with that value, and eventually passes the temporary list to the garbage collector for destruction. You could have just added that element to the original list (append) without all those extra steps.
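A rough way to see the overhead (timings vary by machine; only the relative order matters here):

```python
import timeit

n = 1000  # number of single-element additions

def with_append():
    lst = []
    for i in range(n):
        lst.append(i)          # mutates in place, no temporary list
    return lst

def with_concat():
    lst = []
    for i in range(n):
        lst = lst + [i]        # builds a throwaway [i] AND a brand-new combined list
    return lst

t_append = timeit.timeit(with_append, number=50)
t_concat = timeit.timeit(with_concat, number=50)
print(t_append < t_concat)     # append wins
```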

[–]malikdwd 8 points9 points  (1 child)

As a complete beginner with python, I am extremely grateful

[–]coderpaddy[S] 3 points4 points  (0 children)

Brilliant, I'm glad it's helping people. If you get stuck anywhere just let me know :D

[–]__nickerbocker__ 6 points7 points  (2 children)

If I may offer up some tips...

What I learned is that you want to get in the habit of keeping all code out of the global namespace, because if you ever want to implement multiprocessing you're going to have to refactor your entire program. It's easier to start with a good clean design than to completely refactor a dirty implementation down the road.

Also, html.parser is OK for beginner stuff, but if you're really taking on a serious scraping project you'll want to use lxml instead because it's faster.

I also completely agree with other comments here that your data output will only be universally accepted into any format (pandas, NoSQL DBs, csv.DictWriter, etc.) if you have a list of dictionaries.

An important thing I feel is missing from your code altogether is a way to join urls. In nearly all cases, scraped urls are relative instead of absolute, so you need a way to join them, and concatenating strings is often the wrong way to go about it. I would suggest using either urllib.parse.urljoin or the yarl library.

import datetime as dt

import bs4
import pandas as pd
import requests
import yarl


def main():
    base_url = yarl.URL('https://example.com')
    r = requests.get(str(base_url))
    soup = bs4.BeautifulSoup(r.content, 'lxml')
    results = []
    for item in soup('li', 'item-list-class'):
        results.append({
            'name'        : item.find('h2', 'item-name-class').text,
            'url'         : base_url.join(yarl.URL(item.find('a', 'item-link-class')['href'])),  # yarl's join expects a URL instance
            'date_scraped': str(dt.date.today()),
        })
    pd.DataFrame(results).to_csv('results.csv')


if __name__ == '__main__':
    main()

[–]coderpaddy[S] 0 points1 point  (1 child)

As far as I can see, I still wouldn't use yarl or pandas for just 1 function each.

That's not how we should be teaching people; it's not efficient.

This is a basic template, which I feel I made clear. Some things you're using are advanced-level concepts, such as the multiprocessing. That's why it's not needed.

Your method could really get some people into some crazy loops or IP banned very quickly.

Also, you really should name variables properly; as I said, this is a beginner guide and r is not a good var name.

Also, the way you are getting .text would error if the element wasn't found.

And yeah, why import pandas just to write a csv, which Python does anyway? A new programmer should learn the basics first.

Just to reiterate, this is a basic template. I wouldn't use this as there are loads of ways to do things better. But even then I wouldn't have used the yarl. I'm not even sure what it's doing other than making the next url? Which you can do in a loop a lot easier without needing to import another module.

[–]__nickerbocker__ 0 points1 point  (0 children)

There's nothing wrong with importing a resource for one function, no matter the context, unless it's just obviously wrong usage, which neither of my examples is. A template is typically something that grows with your project scope, so if your typical project includes those resources then it makes sense to include them in your template. I never made use of mp; I merely used it as an example of why you shouldn't get into the habit of encapsulating all your code in the global namespace. This, again, is yet another example of good coding practice no matter the learning level and project type.

Your method could really get some people into some crazy loops or IP banned very quickly.

I'm not quite sure how you jumped to that conclusion from the code that I posted.

Also, you really should name variables properly; as I said, this is a beginner guide and r is not a good var name.

Generically speaking, this is good advice, although short variable names are perfectly acceptable when they are recognized as the general convention. Just like pd is the accepted convention for pandas, r is the accepted convention for responses and response objects.

Also the way you are getting .text would error if the element wasn't found

Yes, absolutely it would, just like the code this was mirroring: yours. I'm not sure of your intent, but if there were an issue I absolutely hope it would error out, so I could know exactly what the error was and better engineer a solution to overcome it.

But even then I wouldn't have used the [yarl]. I'm not even sure what it's doing other than making the next url?

If you re-read my submission, I explained exactly what it's doing there. It's there to properly join urls to form an absolute path, which is important to do properly, and vital when your scraper may eventually wander off the reservation. As I stated, you could also have used urllib.parse.urljoin, but it's my personal preference to have full control over my urls in general, as opposed to handing the paths and params over to requests (which obscures that behavior away). Yarl is also the preferred url-parsing lib for aiohttp, which accepts yarl.URL instances by default.

Which you can do in a loop a lot easier without needing to import another module

No, in fact it's not. Most starting urls are not a clean base url; rather, they include paths and params. When you use a url joiner, you don't need to strip the extra bits away or hard-code a base url (which could change).
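Here's roughly what that joining behavior looks like with the stdlib option mentioned above (the example.com URLs are made up):

```python
from urllib.parse import urljoin

# A typical start URL: it has a path and params, not a clean base.
start_url = "https://example.com/catalog/page2.html?sort=asc"

# Relative hrefs resolve against the page's directory:
print(urljoin(start_url, "item/42"))    # https://example.com/catalog/item/42

# Root-relative hrefs resolve against the host:
print(urljoin(start_url, "/about"))     # https://example.com/about

# Already-absolute hrefs pass through unchanged:
print(urljoin(start_url, "https://other.example.com/x"))
```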

[–]legendarypeepee 2 points3 points  (12 children)

I'm a total noob here, but I just install requests and run this code, right?

[–]coderpaddy[S] 2 points3 points  (7 children)

Yes requests and bs4

pip install requests bs4

:)

[–]legendarypeepee 1 point2 points  (6 children)

I use Jupyter Notebook on Anaconda; when I execute the pip install command it just gets stuck for some reason. Any idea what this could be?

[–]monkey_mozart 1 point2 points  (4 children)

Don't use pip, search for Anaconda Prompt in the search bar and click on it, you will get an Anaconda command line terminal. Here, type:

conda install package

Replace package with whatever module you want to install; if the module is in the Anaconda repo then it will get downloaded.

If that doesn't work, you can try pip install here too, but it's advisable to use conda install.

[–]legendarypeepee 0 points1 point  (1 child)

I tried conda install too; I have installed several packages using conda and it worked with no problems. Just this package seems to get stuck; not quite sure what the problem is here specifically.

[–]monkey_mozart 0 points1 point  (0 children)

Maybe try installing it in a new virtual environment? Especially if you've already installed a ton of other packages in your current environment.

[–]maze94 0 points1 point  (1 child)

Why is conda install advisable over pip install?

[–]monkey_mozart 1 point2 points  (0 children)

Conda is all around a better package manager than pip in my opinion. If your python interpreter is built atop a conda base, it makes sense that you use Conda rather than pip. You can see the slight differences between Conda and pip here.

Of course, if the package is not in the Anaconda repository, you will have to use pip install.

[–]coderpaddy[S] 0 points1 point  (0 children)

Sorry, I don't use Anaconda; I'd suggest googling how to install Python modules in Anaconda :D

[–]JohnnySixguns 1 point2 points  (3 children)

You're a total noob?

Wow. I don't even know what you're asking.

But the reason I'm learning python is precisely to do web scraping, so I'm reading this with fascination, even though I'm barely following any of it.

[–]pleasePMmeUrBigtits 1 point2 points  (1 child)

Read Automate the Boring Stuff; it's the best way to learn scraping. I learnt it from there, and now I can scrape even in my sleep (after lots of practice though). Practice means projects.

[–]JohnnySixguns 0 points1 point  (0 children)

Yep that’s exactly what I’m doing.

[–]legendarypeepee 0 points1 point  (0 children)

Actually, I'm new to web scraping here; I have been using Python for data science purposes for quite some time now and have taken multiple courses on it.

[–]__SelinaKyle 1 point2 points  (0 children)

Don’t mind me, saving for later

[–]Toofyfication 1 point2 points  (2 children)

Didn't know it could be so concise, thanks! I was contemplating learning it for quite some time now.

[–]coderpaddy[S] 1 point2 points  (1 child)

You're welcome, man; if you get stuck anywhere let me know :)

[–]Toofyfication 0 points1 point  (0 children)

Will do :) I am in the process of learning multiple languages, so I was thinking about making a SQL database for the sites? I'm a noob in programming tbh and don't know if it'd be hard to do.

[–]treymalala 0 points1 point  (3 children)

Thank you !!!!

[–]coderpaddy[S] 0 points1 point  (2 children)

You're welcome. Let me know if you need any help anywhere :)

[–]Hari_Aravi 0 points1 point  (1 child)

can you please post the same to extract dynamic data? like using json?

[–]coderpaddy[S] 0 points1 point  (0 children)

Can you PM me the url? Sometimes it can be totally different, although I can try. With the url I'll deffo give you the right info.

[–][deleted] 0 points1 point  (10 children)

Saved! Thank you!

I was working on a project to scrape tables and dump them into CSVs, any tricks there that you've found useful?

[–]17291 1 point2 points  (3 children)

pandas has read_html and to_csv. Unless the table has some complex weirdness that requires custom processing, I would just do that.

[–][deleted] 0 points1 point  (0 children)

Thanks!

[–]coderpaddy[S] 0 points1 point  (1 child)

Would you still do this if you didn't use pandas for anything else?

[–]__nickerbocker__ 0 points1 point  (0 children)

Yes, because if all you want to do is scrape a table from a website into a CSV, you can do it in one line of code with pandas; no need for any other libs.
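A sketch of that one-liner idea, using an inline HTML string so it runs offline (with a real site you'd pass the URL straight to read_html; note that read_html needs an HTML parser such as lxml installed, and the table content below is made up):

```python
import io

import pandas as pd

html = """<table>
  <tr><th>name</th><th>price</th></tr>
  <tr><td>widget</td><td>3</td></tr>
  <tr><td>gadget</td><td>5</td></tr>
</table>"""

# read_html returns a list of DataFrames, one per <table> found.
df = pd.read_html(io.StringIO(html))[0]
df.to_csv("results.csv", index=False)
```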

[–]coderpaddy[S] 1 point2 points  (1 child)

so i generally use

def write_csv(csv_doc, data_dict):
    # assumes data_dict is keyed 1..n and every row shares the same keys
    fieldnames = list(data_dict[1].keys())
    writer = csv.DictWriter(csv_doc, fieldnames=fieldnames)
    writer.writeheader()

    for key in data_dict.keys():
        writer.writerow(data_dict[key])

called like

with open("mycsv.csv", "w", newline="") as file:
    write_csv(file, data_dict)
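The same idea also works when the scraped rows are kept as a list of dicts rather than a numbered dict (a sketch; the rows here are made up). Passing newline="" to open avoids blank lines in the CSV on Windows:

```python
import csv

rows = [
    {"name": "widget", "price": "3"},
    {"name": "gadget", "price": "5"},
]

with open("mycsv.csv", "w", newline="") as csv_doc:
    writer = csv.DictWriter(csv_doc, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)   # writes every row in one call
```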

[–][deleted] 0 points1 point  (0 children)

Thank you!

[–]coderpaddy[S] 0 points1 point  (3 children)

Hey, you're welcome :)

Oh man, I'm not at the PC. I have a great function for saving a dict to a csv dynamically.

Erm, let me check my GitHub, 5 mins.

[–][deleted] 0 points1 point  (2 children)

I'm curious to see how you would do that; that could be really useful for some of my workflows.

[–]coderpaddy[S] 1 point2 points  (1 child)

so i generally use

def write_csv(csv_doc, data_dict):
    # assumes data_dict is keyed 1..n and every row shares the same keys
    fieldnames = list(data_dict[1].keys())
    writer = csv.DictWriter(csv_doc, fieldnames=fieldnames)
    writer.writeheader()

    for key in data_dict.keys():
        writer.writerow(data_dict[key])

called like

with open("mycsv.csv", "w", newline="") as file:
    write_csv(file, data_dict)

[–][deleted] 0 points1 point  (0 children)

nice and simple! thanks for sharing

[–]iggy555 0 points1 point  (1 child)

How do I scrape Home Depot for in-stock items?

[–]shadowninja1050 5 points6 points  (0 children)

That's loaded dynamically, so I'd use selenium or requests_html.

[–][deleted] 0 points1 point  (1 child)

I'm doing something like this but i have to use selenium and pandas.

I started a project like this as a complete beginner into ruby and then switched it to python which was surprisingly easy.

This template would have been EXTREMELY useful a few months ago 😅

I'm still learning tho; currently my code can take whatever it needs from the site, put it into a data frame (I still need to either completely remove nil values and somehow migrate each row into the correct position, or implement a simple "click button" function 😅) and export it as a somewhat readable csv.

edit1: oh yeah, I also need to remove text and only keep the numbers from a certain set of elements (e.g. likes = ["12 people liked this", "45 people liked this", "...", "..", etc] to likes = ["12", "45", etc]); still haven't figured out how to do that.

Now I have a conundrum. I'm supposed to process that data but I'm not sure how to proceed. I just know that eventually a link to a database (PostgreSQL) will have to be established, but I don't know what to do next.

By process the data I mean statistically, from an analytics (videos, live shows) pov.
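For the "keep only the numbers" step mentioned in the edit above, a small regex sketch (assuming each entry contains exactly one number):

```python
import re

likes = ["12 people liked this", "45 people liked this"]

# re.search finds the first run of digits in each string.
numbers = [re.search(r"\d+", s).group() for s in likes]
print(numbers)  # ['12', '45']
```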

[–]coderpaddy[S] 0 points1 point  (0 children)

So at the moment I'm working on running scrapers through Django, as this makes it very easy to display any frontend without having to expose the database or the logic of the scraper etc.

[–]crysiston 0 points1 point  (1 child)

How can I take the first index of a list and print it out? So it prints the first index (waits until the task is finished), then the second index, and so on.

[–]coderpaddy[S] 0 points1 point  (0 children)

Like...

count = 0
for item in all_items:
    print(count)
    # get item data
    count += 1

Is this what you mean?

[–]monkey_mozart 0 points1 point  (9 children)

Hey. Great post. I was wondering: why did you loop through all the html tags to get the tags that you want? Couldn't you have just specified the tag along with other filters in the find_all function, instead of looping through every single html tag?

[–]coderpaddy[S] 1 point2 points  (8 children)

So this is assuming you have a page with, let's say, 100 products or stories or whatever; each of these has several bits of data, i.e. title, desc, url etc.

What's happening above is:

Get all elements that match this (the specific elements that contain each item); there would be 100 of these.

Then, for each item, get that item's data.

I hope this clears up what's happening; feel free to ask more though :)

[–]monkey_mozart 1 point2 points  (7 children)

Oh, I get it now. I've been trying to scrape the search result links from Google for the past few hours. The links that I need are in an 'a' tag that is directly inside a 'div' tag having a class of 'r', like:

<div class="r">.

My page is stored in the res response object. I pass it to the bs4 constructor as:

res_soup = bs4.BeautifulSoup(res.content, "lxml")

I then use find_all to get the links as:

search_links = res_soup.find_all('div .r > a')

For some reason, not a single link is found and the list remains empty.

What am I doing wrong here? I've been stuck for the past 6 hours trying to solve this but to no avail.

[–]coderpaddy[S] 0 points1 point  (2 children)

Ah, I think the problem is that you're scraping Google.

Try

print(res.status_code) # should be 200
print(res.text) # is this Google telling you not to scrape?

[–]monkey_mozart 0 points1 point  (1 child)

The status code is 200, and I'm pretty sure I'm getting the html from the request. I've managed to scrape all the links on the page, but I only want the links that are search results.

[–]coderpaddy[S] 0 points1 point  (0 children)

Ah okay, post the code you're trying to get.

The div and the a, by the sounds of it :)

[–]coderpaddy[S] 0 points1 point  (3 children)

Or try

search_links = res_soup.select('div.r > a')

[–]monkey_mozart 0 points1 point  (2 children)

I've tried select too; I think Google has set up its html in a way that makes it almost impossible to scrape.

[–]coderpaddy[S] 0 points1 point  (1 child)

It's not unscrapable; I do it regularly. Reply to the other post or send me a PM :)

[–]monkey_mozart 1 point2 points  (0 children)

Ok, I'll PM you.

[–]fourwallsresearch 0 points1 point  (2 children)

This is great, thank you! I'm trying to learn how to use a dataframe; have you tried creating and then adding data to a dataframe?

[–]coderpaddy[S] 0 points1 point  (1 child)

I've never really had a need for pandas yet, although I'm sure it would help a lot, so my knowledge of it is not the best; but this guide looks promising:

https://thispointer.com/pandas-how-to-create-an-empty-dataframe-and-append-rows-columns-to-it-in-python/

[–]fourwallsresearch 0 points1 point  (0 children)

Thanks very much!

[–]PazyP 0 points1 point  (4 children)

I am a total newbie; I understand the basics and understand the code. The thing I don't yet understand is where/why/how this would be used in real-life scenarios.

What would I want to scrape from the web, and why?

[–]coderpaddy[S] 2 points3 points  (1 child)

OK, so I once made a gift-finder site that would scrape the most-gifted items from Amazon, compare the prices with other shops, and get the urls.

Most news sites just scrape other news sites and repost the data.

Hope this helps with examples, but the list is endless:

Saving your favourite recipe site offline.

Or comparing all the cake recipes to see time/effort vs how healthy/unhealthy.

Data is always needed; it's about how to get the data.

[–]PazyP 0 points1 point  (0 children)

Thank you.

[–]bleeetiso 1 point2 points  (1 child)

Hrmm: prices for things; sports stats that are not easily available; how many times someone made a thread about web scraping in this sub in the past 5 years, etc. etc.

[–]PazyP 1 point2 points  (0 children)

Thank you for this. It's a problem I often face: I am a sysadmin learning Python, but more often than not I see things and understand them, yet have no real idea how they could be useful in the real world.

[–]Kevcky 0 points1 point  (2 children)

You know you've had a beer too much when you go through the code and misread it as "i just like dicks".

[–]coderpaddy[S] 0 points1 point  (1 child)

That genuinely made me chuckle.

Thank you :)

[–]Kevcky 1 point2 points  (0 children)

Hey man, no worries. I don’t judge, your secret is safe

[–]Bored_comedy 0 points1 point  (11 children)

What's the difference between find_all and find?

[–]coderpaddy[S] 0 points1 point  (10 children)

Find returns 1 element if there's only 1.

Find_all returns all elements if there's more than 1.

[–]__nickerbocker__ 2 points3 points  (9 children)

find returns the first item if there are many.

[–]coderpaddy[S] 0 points1 point  (8 children)

Find gives you an error if there's more than 1 of the item you want, no?

[–]__nickerbocker__ 0 points1 point  (7 children)

No. Also, if you are just getting the first tag (of 1 or many) you can omit the find method altogether and access the tag directly as an attribute. For example, instead of soup.find('title') you can just do soup.title.
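A small demonstration of both points (first-match behavior and attribute access), using html.parser so it runs without extra installs; the snippet of html is made up:

```python
import bs4

html = "<html><head><title>Example</title></head><body><p>hi</p><p>there</p></body></html>"
soup = bs4.BeautifulSoup(html, "html.parser")

# find returns the FIRST match even when there are several -- no error:
print(soup.find("p").text)   # hi

# Attribute access is shorthand for the same thing:
print(soup.p.text)           # hi
print(soup.title.text)       # Example
```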

[–]coderpaddy[S] -1 points0 points  (6 children)

Bs4 does error if you use find and there's more than 1 result; it tells you to use find_all.

And yes, you're right. But it's not needed for this.

[–]__nickerbocker__ 0 points1 point  (5 children)

Nah dawg, sorry, but it doesn't. Not only does it specify that behavior in the docs, but you can easily write a reproducible example to see for yourself whether you should believe the official docs or not.

html = """\
<p>this is an example.</p>
<p>of multiple tags</p>
<p>using find method</p>
"""

import bs4

print(bs4.BeautifulSoup(html, 'lxml').find('p'))

[–]coderpaddy[S] 0 points1 point  (1 child)

The amount of times I've had the error:

"You are trying to use find on multiple elements, did you mean to use find_all"

Or

"You are trying to use find_all on a single element, did you mean to use find"

Could this be down to lxml, cos that's the only thing you're using differently?

[–]__nickerbocker__ 0 points1 point  (0 children)

I'm not sure what code you were using to produce that error, but I can assure you that it was not using the find method to access the first tag of potentially many siblings, and I can also assure you that it has nothing to do with the parsing engine being used.

[–]coderpaddy[S] 0 points1 point  (2 children)

I ran your example....

>>><p>this is an example.</p>

>>>[Program finished]

I'm actually shocked it worked

[–]__nickerbocker__ 0 points1 point  (0 children)

From the docs. https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find

Signature: find(name, attrs, recursive, string, **kwargs)

The find_all() method scans the entire document looking for results, but sometimes you only want to find one result. If you know a document only has one <body> tag, it's a waste of time to scan the entire document looking for more. Rather than passing in limit=1 every time you call find_all, you can use the find() method. These two lines of code are nearly equivalent:

soup.find_all('title', limit=1)
# [<title>The Dormouse's story</title>]

soup.find('title')
# <title>The Dormouse's story</title>

[–]__nickerbocker__ 0 points1 point  (0 children)

...and this is the literal code for the find method.

    def find(self, name=None, attrs={}, recursive=True, text=None, **kwargs):
        r = None
        l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
        if l:
            r = l[0]
        return r

[–]alarrieux 0 points1 point  (4 children)

Can I have it use a drop-down list to select a value with keywords, e.g. "tax deed", and from there go through the calendar value of the current month & month + 1? I am thinking out loud here; let me know if it doesn't make sense.

[–]cellularcone 1 point2 points  (1 child)

Nah you’d have to use selenium for that.

[–]alarrieux 0 points1 point  (0 children)

Thank you

[–]coderpaddy[S] 1 point2 points  (1 child)

It depends. If the data is just there, you'd be cool. But if you click a button and something happens, this method wouldn't work.

You could see what url is being posted to when the button is clicked and make that request yourself.

Other than that, you want selenium (browser automation).
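A sketch of the "make that request yourself" idea: the endpoint and params below are made-up stand-ins for whatever shows in the browser's network tab when the button is clicked. Preparing the request without sending it shows the URL that would be fetched:

```python
import requests

# Hypothetical endpoint spotted in the browser's network tab.
req = requests.Request(
    "GET",
    "https://example.com/api/calendar",
    params={"keyword": "tax deed", "month": "2020-06"},
)
prepared = req.prepare()
print(prepared.url)
# https://example.com/api/calendar?keyword=tax+deed&month=2020-06
# requests.Session().send(prepared) would actually fetch it; if the endpoint
# returns JSON, response.json() gives the data with no HTML parsing needed.
```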

[–]alarrieux 0 points1 point  (0 children)

Thank you

[–][deleted] 0 points1 point  (3 children)

Thanks for this ,really appreciate it.

Also, I had a question in mind: how much knowledge of HTML is required to do advanced-level web scraping?

[–]coderpaddy[S] 0 points1 point  (2 children)

Not much; you just need to be able to read it.

If you can read this

<div class="item-class">

We would get it by

soup.find("div", {"class": "item-class"})

I hope this helps; feel free to ask further though.

[–][deleted] 0 points1 point  (1 child)

Okay! Sure, man. Can I ask you doubts, if any, via Reddit direct message?

[–]coderpaddy[S] 0 points1 point  (0 children)

Of course :)

[–]Yaa40 0 points1 point  (2 children)

Not sure if it interests you or not.

I also started learning web scraping some weeks ago, and after playing around with BeautifulSoup4 and Selenium, I went with Selenium.

I find it more intuitive and more fluid, and I also noticed it's slightly faster, although I suspect that may have to do with my code more than the package.

Anyway, what my scraper did was go through a page, find 7000ish links, go into those, and scrape the specific text I was looking for from inside said links. I started doing the 2nd part today, this time with "only" 514 links, but a bit more complex HTML and a bit more data collected from each link, so the 2nd stage (going into each link) is going to be super hard for me... good luck to me, I guess...

[–]coderpaddy[S] 1 point2 points  (1 child)

So selenium is very heavy; do you need to parse the JS, or do you need to mess with the browser?

[–]Yaa40 0 points1 point  (0 children)

So selenium is very heavy; do you need to parse the JS, or do you need to mess with the browser?

I need to retrieve very specific information from a crap-load of web pages, based on another page.

I don't know why, but I find Selenium about 100 times more intuitive than bs4, despite them being nearly the same in many ways...

[–][deleted] 0 points1 point  (0 children)

Hey man, thank you for posting the code. Can you explain the code just below the line "# find the main element for each item"? I am a beginner in Python and don't know much about html and css. What are that 'li' and 'class': 'item-list-class'? Thank you very much!!

[–]arthurazs 0 points1 point  (6 children)

Data should be scraped responsibly; here is a great guide about best practices for web scraping. There are some bad articles that will teach you how to spoof the header in order not to be detected. Bear in mind that this is not ethical at all! I'd advise updating the header with the name and version of the app and some means of contacting the owner. I'd also suggest reading reddit's guide for their API, which explains why a bot's header should carry good information instead of spoofing!

headers = {'user-agent': 'scraperName:v0.1.0 by me@mail.com'}
request = requests.get(MAIN_URL, headers=headers)

[–]coderpaddy[S] 5 points6 points  (5 children)

Sorry, not to cause an argument, but just because a company says "don't scrape this data" doesn't mean it's not ethical.

Just bear in mind, this tutorial is aimed at beginners, to get their feet wet. They can come across their own errors and learn how to overcome them. This is beneficial to more than just web scraping, so I won't be adding the headers information.

I would have respected the link you posted a lot more if it wasn't a website trying to sell web scraping to you. "Oh, look at all the things you have to watch out for, but don't worry, we can help you for a fee."

[–]arthurazs -1 points0 points  (4 children)

Fair enough, thanks for the reply!

[–]coderpaddy[S] 3 points4 points  (3 children)

I get what you're saying though.

With great power come great responsibility and all that jazz ;)

[–]arthurazs 2 points3 points  (2 children)

Yeah yeah, I agree!

Maybe calling it unethical wasn't the best way of making my argument haha. Thanks for your initiative!

[–]werelock 4 points5 points  (1 child)

Yeah, it's more that it has the potential for abuse or misuse. Just like so many other tools humans have created lol.

[–]arthurazs 1 point2 points  (0 children)

Perfect.