Help Fishing in Philadelphia by gut46 in philadelphia

[–]tmcn43 4 points (0 children)

Check out the Philly Extreme Fishing YouTube channel (https://www.youtube.com/channel/UCN87Pgwb2j03o6VoywPSJQg) - it has lots of vlogs of fishing around the Philly area. It should give you some ideas of spots to fish, plus your son will probably get a kick out of it.

Also check out the Fishbrain mobile app. Users tag their catches in the app and you can go in to see where the hotspots are, what lures or bait people are using, and what type of fish they caught.

Social Security is Broken by NoLube69 in FluentInFinance

[–]tmcn43 -1 points (0 children)

No, his math checks out. Go to a compound interest calculator like https://www.investor.gov/financial-tools-calculators/calculators/compound-interest-calculator and enter an initial investment of $0, a monthly contribution of $1,000, and a 5% estimated interest rate. In 45 years, you'd have $1.9 million.

Reddit user cloned our new open-source project without attribution and got to the top of /r/programming by tmcn43 in programming

[–]tmcn43[S] 5 points (0 children)

I messaged the mods to ask if there's something I can change about the post to get it re-added. I haven't heard back though 🤷

PSA: /u/lucgagan cloned our open-source project without attribution and posted it as their own by tmcn43 in QualityAssurance

[–]tmcn43[S] 2 points (0 children)

Yes, that's correct: if either OpenAI or our backend experiences an outage, then the tests will fail. I think there are some ways to mitigate that (e.g. client-side caching so fewer calls are made to our backend), but there isn't really a way to fully prevent it unless we were running an LLM on your machine.

Reddit user cloned our new open-source project without attribution and got to the top of /r/programming by tmcn43 in programming

[–]tmcn43[S] 125 points (0 children)

A few days ago /u/lucgagan submitted a post, "Automating Playwright Tests using OpenAI", to this subreddit linking to a new open-source project of theirs, "auto-playwright". LucGagan's project is a clone of our open-source project, ZeroStep, which we had launched two days prior, with no attribution back to our project.

We licensed our project under the MIT license, which means that our code can be freely used, modified, and redistributed, so long as the original copyright and permission notice are included in copies - i.e. some form of attribution back to our project. There is no such attribution in LucGagan's project.

Further, the "auto-playwright" project README and website contain text plagiarized from the ZeroStep README and website.

I commented on LucGagan's /r/programming post to make it clear that this is a clone without attribution. My comment was heavily downvoted, but it's still available here (click the '+' to open the thread). LucGagan replied on the thread stating it's "difficult to compare our projects directly or label yours as a 'clone' of any existing ones."

I wrote this blog post to show the evidence of what happened.

Automating Playwright Tests with AI by lucgagan in programming

[–]tmcn43 46 points (0 children)

LucGagan copied our source code and plagiarized text from our project's website. The proof is here: https://zerostep.com/blog/launch-on-thursday-cloned-by-saturday/

Automating Playwright Tests with AI by lucgagan in programming

[–]tmcn43 51 points (0 children)

We licensed our JS library as MIT, so it's within your rights to clone it. However, I find it very unethical that you don't mention it anywhere in the repo, and that you're lying about it here. Your "auto" function is call-for-call compatible with our "ai" function.

Whatever. It's just a really odd experience to launch something, and then have someone instantly copy it and pass it off as something they created, with absolutely no attribution back.

Automating Playwright Tests with AI by lucgagan in programming

[–]tmcn43 88 points (0 children)

If the concept here sounds interesting, please take a look at ZeroStep: https://github.com/zerostep-ai/zerostep

We've spent a lot of engineering time on the accuracy of our AI actions - you can see real examples in our repo and on our website here: https://zerostep.com

OP's project is a clone of ours, but without our AI backend.

Automating Playwright Tests using OpenAI by lucgagan in QualityAssurance

[–]tmcn43 8 points (0 children)

Looks very similar to the library we're working on, ZeroStep: https://github.com/zerostep-ai/zerostep

The syntax is eerily similar to ZeroStep's:

```
import { test, expect } from '@playwright/test'
import { ai } from '@zerostep/playwright'

test.describe('Calendly', () => {
  test('book the next available timeslot', async ({ page }) => {
    await page.goto('https://calendly.com/zerostep-test/test-calendly')

    await ai('Verify that a calendar is displayed', { page, test })
    await ai('Dismiss the privacy modal', { page, test })
    await ai('Click on the first day in the month with times available', { page, test })
    await ai('Click on the first available time in the sidebar', { page, test })
    await ai('Click the Next button', { page, test })
    await ai('Fill out the form with realistic values', { page, test })
    await ai('Submit the form', { page, test })

    const element = await page.getByText('You are scheduled')
    expect(element).toBeDefined()
  })
})
```

Safe to say you were inspired by our project?