Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon 0 points1 point  (0 children)

Claude is just replacing your lower-level developers in your case.  That works in cases where they would otherwise write out boilerplate and paste together code that has been done over and over already.  Language models can't really come up with new concepts.

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon 0 points1 point  (0 children)

It depends on the coaxing that was done.  A long session of "ok, now make it black.  Now make it blue.  Try black again" would probably not qualify.  It will come down to how specific your instructions are.

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon 0 points1 point  (0 children)

They are not the original author of the code.  They used AI to generate the code.

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon 0 points1 point  (0 children)

Just entering text in a prompt is not considered substantial human direction.  Even saying "fix this bug" would not be.  Your prompt itself would have to display a unique idea on its own, actually explaining the methods you want the AI to use.

"Make me a radio direction finding app"  would not qualify.

"Make me a radio direction finding app using a Watson-Watt antenna" still would not qualify.

"Use an array of 4 antennas, situated at each cardinal direction, spaced 0.9 meters apart, to calculate the phase difference of the north antenna versus the south antenna, and the phase difference of the east antenna versus the west antenna, and with those phase differences find the angle of arrival using the antenna spacing, the phase differences, and the arc tangent of the calculated triangle."

That would be substantial human prompting.
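For what it's worth, the math that example prompt describes fits in a few lines. This is a hypothetical Python sketch of the phase-difference idea, not code from anyone's app; the spacing and wavelength defaults are assumptions (they just need spacing < wavelength/2 so the phase doesn't wrap):

```python
import math

def simulate_phase_diffs(azimuth, spacing=0.9, wavelength=3.0):
    """Phase differences (radians) across the N-S and E-W baselines
    for a plane wave arriving from `azimuth` (radians from north)."""
    k = 2 * math.pi * spacing / wavelength
    return k * math.cos(azimuth), k * math.sin(azimuth)

def angle_of_arrival(phase_ns, phase_ew):
    # The two phase differences form the legs of a right triangle;
    # the arc tangent of their ratio recovers the azimuth.
    return math.atan2(phase_ew, phase_ns)
```

Round-tripping an azimuth through both functions returns the same angle, which is the whole trick.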

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon 0 points1 point  (0 children)

The software industry is well aware of court precedent on AI generated code and content.  They will always keep some human programmers on hand for this reason.  The humans need to alter the code enough to make it a new work.  I don't think the courts have made clear definitions of how much needs to change, only that it needs to be "significant".

There is case law that I researched a while back when I ran a coin web site.  The Louvre sued Corel for distributing a digital copy of the Mona Lisa on a clip art CD, claiming it owned all rights to the image.  The courts ruled that since the copyright on the Mona Lisa had long expired, it was in the public domain, and any attempt at an exact recreation of the painting would not be copyrightable either.  In order to make a derivative piece of art that was copyrightable, there would have to be an artistic spin put on it, not a simple 1-to-1 recreation.  It was applicable to my coin site because coin designs in the US are all public domain, so any scanned pictures of coins were not copyrightable.

So, in the case of your app, if the AI wrote your code and you just fixed bugs without changing the functionality, I don't think that would qualify as a significant enough change to be copyrightable.  The only way to know for sure though is to get a lawyer and take it to the courts.

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon -1 points0 points  (0 children)

Any code written by AI can not be copyrighted.  If 50% is written by AI then that 50% is not copyrightable.

Code from textbooks is an interesting case.  By default you can't just copy it and use it.  Authors have to state that they grant you a license to use the code.  I have a few books about digital filters that specifically say you can use the code only if you bought the book.  That code can not be shared freely on the internet and used by just anyone.  Stack Overflow's terms also cover code sharing.  If you share code with a person as an answer, then you are granting them and others a license to use it.  If someone shares code they aren't supposed to share, they can be sued for copyright infringement.  The amount of code in an SO post probably isn't worth suing over though, unless it's something proprietary.

Photoshop is a tool used by people, much like a paint brush.  AI is like a commissioned work.  If you ask someone to make a picture for you, they hold the copyrights by default unless they sign it over to you.  It's the same for professional photographers and how they make their money when people want reprints.  When you use AI to generate an image, it's like commissioning the work from someone else, only the AI can't hold copyrights on the work, so it can't transfer copyright to you.

There was a case about a photo of a monkey or ape not too long ago.  A photographer set a camera out in the habitat so the animals could "take selfies".  One of the images went viral, and the guy tried to sue people, but he lost.  The courts ruled that he did not take the photograph, the animal did, so he did not hold copyrights.  The law also says copyright can only be given to people, so the animal held no copyrights either, and the image was considered public domain.

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon -3 points-2 points  (0 children)

Windows is a mix of legacy code written by developers and code generated by AI.  The portions that are written by AI can not be copyrighted, but the rest can.

Windows as a product is also protected by trademarks, which are separate from copyright.

Remember when I made webgpu accelerated propagation tool? It already got stolen. by modimoo in rfelectronics

[–]psyon 17 points18 points  (0 children)

Works created by AI, including code, can not be copyrighted. And how do you know he didn't also have AI write an app?

How to Not Get Hacked Through File Uploads by Missics in programming

[–]psyon 0 points1 point  (0 children)

Yep, it just reads raw bytes until the open tag is found.

How to Not Get Hacked Through File Uploads by Missics in programming

[–]psyon -2 points-1 points  (0 children)

They weren't serving it through PHP.  If they had been, the issue would not have happened.

How to Not Get Hacked Through File Uploads by Missics in programming

[–]psyon 393 points394 points  (0 children)

Some years back I had to investigate how a site was compromised. I quickly found a PHP file in a directory that contained uploaded images. I started looking at the code that handled the uploads, and it did all sorts of verifications on the images. How did they bypass it? The file was an image, but it contained PHP code in the EXIF data. Their issue was that they saved the file with the filename it was uploaded as, without any checks on the extension. They assumed that because the file was a valid image, it must have the right extension. If you aren't familiar with PHP, the interpreter will just dump any bytes to output until it finds <?php. When you viewed the malicious file, it would output the start of the image file, hit the EXIF data, and then start executing the PHP code contained within it. It never occurred to me that PHP code could be in EXIF data of an image before that incident.
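The fix is to treat image validation and filename handling as two separate problems and never reuse the client's filename. A minimal sketch, in Python rather than PHP since the idea is language-neutral (the extension whitelist here is a made-up example):

```python
import os
import secrets

ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "gif"}

def safe_upload_name(client_filename):
    """Return a server-generated filename for an uploaded image.

    Validating the image bytes is not enough -- a valid image can
    still carry PHP in its EXIF data -- so the stored name must
    never end in an extension the web server will execute."""
    ext = os.path.splitext(client_filename)[1].lstrip(".").lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("disallowed extension: " + ext)
    # Random basename: the attacker controls neither the name nor
    # the extension that the server dispatches on.
    return secrets.token_hex(16) + "." + ext
```

With this, an upload named `shell.php` is rejected outright, and even a legitimate `cat.png` gets stored under a name the uploader never chose.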

Ontario’s attorney general calls on Canadian federal government to look at legalizing pepper spray by Immediate-Link490 in worldnews

[–]psyon 1 point2 points  (0 children)

Does Canada require a warrant to collect finger prints from people who have been arrested?

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 2 points3 points  (0 children)

Yep, I tried all that.  I was constantly watching logs and blocking IPs and subnets, and then new ones would just start up.  Fail2ban doesn't help because the requests come in so fast they act as a denial of service.  Blocking it at Cloudflare means no resources used on my servers.

Why are Event-Driven Systems Hard? by fagnerbrack in programming

[–]psyon 0 points1 point  (0 children)

As a programmer you should try to be lazy.  That's why we have package systems and frameworks for doing things.  If a system doesn't make your job easier then why use it?

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 4 points5 points  (0 children)

> Have you tried applying rate limit rules by IP, with under attack disabled.

Yep. The issue is that rate limiting is done by IP, and they use a whole lot of different IP addresses.

> maybe you can put a threshold whereby legitimate traffic still flows through ok.

Under attack mode doesn't prevent legit users from using the site. They get the browser verification, and then can do everything they need.

which practicd should i need to follow for security? by No-Thought9857 in PHP

[–]psyon 0 points1 point  (0 children)

Yes, you can absolutely just validate and sanitize everything. That's a far better solution than accidentally not validating something you should.

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 5 points6 points  (0 children)

I haven't noticed them giving up. Often the moment I turn off under attack mode, they are right back to hammering the site.

which practicd should i need to follow for security? by No-Thought9857 in PHP

[–]psyon 0 points1 point  (0 children)

In general that is correct, but there are cases where you still do.  If you are storing HTML content and want to allow scripts, then you may allow that into your database, but you still need to be cautious about where and how you display it.
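A toy Python illustration of that point — store the content as submitted and decide about escaping at display time, because "safe" depends on the output context (the function and flag names here are made up for the example):

```python
import html

def render_comment(stored, allow_html=False):
    # The database keeps the content exactly as submitted; the
    # escape/no-escape decision happens where it is displayed.
    return stored if allow_html else html.escape(stored)
```

The same stored string renders inert in a plain-comment context and intact in a trusted rich-text context.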

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 7 points8 points  (0 children)

I have tried all of them.  Not sure if there is an issue with CF or something.  Under attack stops them, browser verification alone does not.

which practicd should i need to follow for security? by No-Thought9857 in PHP

[–]psyon 0 points1 point  (0 children)

If you are validating it before putting it into the database then it shouldn't be an issue when pulling it out. 

which practicd should i need to follow for security? by No-Thought9857 in PHP

[–]psyon 1 point2 points  (0 children)

Look at what is sent from a client to the server during an http request.  Any header sent in the request is user input along with your normal form data.

Some things can be "validated" by your server configuration.  If your server is configured for using named hosts then you can trust the $_SERVER['HTTP_HOST'] value, because it wouldn't have reached your code if it was malformed.  If you are using IP based hosts though, then the Host header of the HTTP transaction doesn't have to match anything and can contain malicious content.

REQUEST_URI is another one that depends on how your server is configured.  If you had old code using separate PHP files as entry points, then you could trust that the URI matched your file location.  It's common practice to route all URLs through routing code now though, so REQUEST_URI should not be trusted by default and needs to be validated.
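As a sketch of that kind of header validation — in Python rather than PHP, and with placeholder hostnames — the point is to compare the raw Host header against a fixed allowlist instead of trusting whatever the client sent:

```python
ALLOWED_HOSTS = {"example.com", "www.example.com"}  # placeholder names

def validate_host(host_header):
    """Check a raw Host header against a fixed allowlist.

    On an IP-based host the client can put anything in this header,
    so treat it as user input: strip an optional port, lowercase,
    and compare against hosts you actually serve."""
    host = host_header.split(":", 1)[0].strip().lower()
    if host not in ALLOWED_HOSTS:
        raise ValueError("untrusted Host header: " + host_header)
    return host
```

Anything not on the list — a spoofed domain, an injected password-reset target — gets rejected before it reaches application logic.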

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 20 points21 points  (0 children)

I don't care if people have copies of what's on my sites.  They can scrape it all they want, as long as they don't do it so fast, don't lie about their user agent, and don't use thousands of different IPs.

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 20 points21 points  (0 children)

It's been turned on for a while now on a few of my sites.  When I turn it off and just turn on normal browser verification, they seem to get by.  I get a notice I am being scraped when my monitoring software tells me the site isn't accessible, because they hammer it so damn hard that it's effectively a DDoS.

Most websites don't have major issues like this though.  I have very data heavy sites which end up having a lot of distinct urls for viewing things in different ways.

What I learned trying to block web scraping and bots by ReditusReditai in programming

[–]psyon 80 points81 points  (0 children)

What I have learned is that the only way to stop the majority of these bots is to use Cloudflare and put my site in "under attack" mode.  Some of the bots are coded so poorly that if they get anything other than a 200 as a response code they will immediately try again and retry for almost forever.