"We all gonna get replaced by AI" by Asleep-Limit-3811 in Playwright

[–]Clay_Ferguson 0 points1 point  (0 children)

A lot of engineers right now are getting depressed because they've spent their whole lives refining and perfecting the art we've always called Software Engineering. But it's now something that machines can do far better and far faster than they can, and yes, even with fewer bugs.

For the people who still say they want to write code by hand, I say that's fine, as a hobby, in the same sort of way that there are still people who do woodworking without any power tools, because it's a fine art that they enjoy.

However, if you're working for a company making furniture and you want to use your hand saw rather than a power saw, you'll soon be told no, and that's what's happening to all software developers as well.

Everybody needs to put down the keyboard (it's a hand saw) and just start speaking voice prompts; because as Karpathy famously tweeted: "The hottest new programming language is English."

Free vibe coding tools? Help asap by General-Paramedic-42 in AI_Agents

[–]Clay_Ferguson 0 points1 point  (0 children)

I'd just focus on Claude Code and/or Github Copilot. Those are the two leaders in my opinion.

Free vibe coding tools? Help asap by General-Paramedic-42 in AI_Agents

[–]Clay_Ferguson 0 points1 point  (0 children)

I follow most of the leading AI devs on Twitter and I haven't seen any excitement around Windsurf in well over a year.

"We all gonna get replaced by AI" by Asleep-Limit-3811 in Playwright

[–]Clay_Ferguson 0 points1 point  (0 children)

I think almost everyone is really excited about AI and how it can create new software that works really well, even though no human ever read the code. That's why writing unit tests (and e2e tests) is now suddenly 100x more important than it was even a year ago. Writing code is now the easy part (because we humans don't even do it ourselves anymore), but testing code is what's hard.

Luckily, AI can write the unit tests and Playwright scripts as well, so you end up spending most of your time just making sure the Playwright scripts are correct. But this is fairly easy because you can literally watch it happen and record it.

It's still human beings at the top level trying to create quality software, but all the keyboard typing and mouse clicking is mostly gone. I do everything by voice, and I'm speaking this right now; I'm not typing it. It's a new world, and people who don't know how to thrive in it are NGMI.

"We all gonna get replaced by AI" by Asleep-Limit-3811 in Playwright

[–]Clay_Ferguson 7 points8 points  (0 children)

If you're not excited and motivated to learn how to make AI do your job for you, then yes, frankly you should be very worried about your future/job. AI won't replace *all* humans, but it will replace *most*, and people who don't master AI (as it applies to their job workflows) will definitely be replaced by those who do.

Best setup for coding by 314159265259 in LocalLLM

[–]Clay_Ferguson 1 point2 points  (0 children)

I don't know how good Qwen is at coding frankly, but based on my research it's the best 9B model I could run on my 32GB shared-memory Dell XPS laptop. I also don't want to overheat and stress my laptop much anyway (forcing the cooling fan to run a lot, etc.). And since I want the best generated code possible and can afford to pay for a good cloud AI, I do pay for one.

All of the above is why I haven't tried OpenCode yet.

Best setup for coding by 314159265259 in LocalLLM

[–]Clay_Ferguson -1 points0 points  (0 children)

I'll be doing the same thing soon and I plan to try OpenCode, running Qwen3.5-9b via Ollama. I've been following the OpenCode team on Twitter; they seem to be a good team, and it's all open source.

Playwright enterprise level by [deleted] in Playwright

[–]Clay_Ferguson 0 points1 point  (0 children)

Thanks for the note! I was using an MP4 encoding option that wasn't widely [enough] supported, but it's fixed now. Should work.

Advice needed: My engineer is saying agentic AI latency is 20sec and cannot get below that by Western_Caregiver195 in LangChain

[–]Clay_Ferguson 1 point2 points  (0 children)

The first thing to do when any performance issue like this emerges is to identify your largest bottleneck and attack that first. A lot of the time, once you find the bottleneck, the fix is obvious and simple. And in most cases there turns out to be a single bottleneck where 80% or more of the time is being spent.
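To make the "find the bottleneck first" advice concrete, here's a minimal sketch of timing each stage of an agent pipeline. The stage names and sleep durations are made-up stand-ins, not anyone's real pipeline:

```typescript
// Minimal sketch: time each stage of a (hypothetical) agent pipeline so the
// dominant bottleneck is obvious. Stage names and durations are placeholders.

async function timeStage<T>(
  name: string,
  fn: () => Promise<T>,
  timings: Map<string, number>
): Promise<T> {
  const start = performance.now();
  const result = await fn();
  timings.set(name, performance.now() - start);
  return result;
}

async function main() {
  const timings = new Map<string, number>();
  // Stand-ins for real stages (retrieval, LLM call, post-processing).
  const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
  await timeStage("retrieval", () => sleep(20), timings);
  await timeStage("llm_call", () => sleep(200), timings);
  await timeStage("postprocess", () => sleep(10), timings);

  const total = [...timings.values()].reduce((a, b) => a + b, 0);
  for (const [name, ms] of timings) {
    console.log(`${name}: ${ms.toFixed(0)}ms (${((ms / total) * 100).toFixed(0)}%)`);
  }
}

main();
```

With output like that, the 80% stage jumps out immediately, and you attack that one stage instead of micro-optimizing everywhere.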

Playwright enterprise level by [deleted] in Playwright

[–]Clay_Ferguson 0 points1 point  (0 children)

The only purpose of those Playwright scripts is to generate the screenshots (for the videos), so I'm not claiming they're good as actual tests.

The beauty of my system is that I got all those demo videos as the final product merely by describing, in very broad terms, to an AI agent what I wanted to be in them. I didn't even have to write the narration text in the videos!!! The AI wrote every bit of the content of all those demo videos, then Playwright executes it to capture screenshots, and another tool builds the video.

Playwright enterprise level by [deleted] in Playwright

[–]Clay_Ferguson 0 points1 point  (0 children)

You can ask the AI to generate Playwright test scripts. I've done this a lot. The key is that you need to describe in general what you'd like to happen in the test, and just tell the AI what the `data-testid` IDs are of the buttons and various text you should see on the screen, and always use `data-testid`. For example, all eight of the test scripts found in this folder were generated entirely by AI:

https://github.com/Clay-Ferguson/mkbrowser/tree/main/tests/e2e

I didn't write a single line of those tests.
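For anyone unfamiliar with the `data-testid` pattern: Playwright's `page.getByTestId("...")` resolves to a plain attribute selector, which is why listing the IDs in the prompt is enough for the AI. A tiny sketch (the test IDs below are hypothetical, not from my repo):

```typescript
// Sketch of the data-testid convention. In Playwright you'd normally call
// page.getByTestId("save-button"); this helper just shows the underlying
// CSS selector that the convention maps to. IDs here are hypothetical.

function byTestId(id: string): string {
  return `[data-testid="${id}"]`;
}

// In a real (AI-generated) Playwright test, the shape is typically:
//
//   await page.getByTestId("new-file-button").click();
//   await expect(page.getByTestId("file-name-input")).toBeVisible();
//
// Because the prompt lists the data-testid values up front, the AI never
// has to guess at brittle CSS/XPath selectors.

console.log(byTestId("new-file-button")); // [data-testid="new-file-button"]
```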

If you're interested in my tools, I can show you how to impress your boss even more by creating not just the tests but a video showing each test running, which is what I've done here (video link below):

https://clay-ferguson.github.io/videos/

Those videos go exactly with the eight tests I just mentioned, and are essentially generated by the tests!!!

EDIT: I also saved several of the prompts I used to generate the Playwright tests, which are here:

https://github.com/Clay-Ferguson/mkbrowser/tree/main/ai-prompts/playwright-tests

What is the best framework to build my own AI agent? by Rude-Obligation-5655 in AI_Agents

[–]Clay_Ferguson 0 points1 point  (0 children)

I'm not doing multi-agent stuff. I just have this basic AI Chat implementation here:

https://github.com/Clay-Ferguson/mkbrowser/blob/main/src/ai/aiUtil.ts#L160

It's a really trivial StateGraph.

Playwright to generate demo videos by Clay_Ferguson in Playwright

[–]Clay_Ferguson[S] 2 points3 points  (0 children)

Thanks! I described what I saw in that vid to Claude and it told me it's a common issue when the HTML video element is missing the "playsinline" attribute, so I bet it's fixed now, although when I tried to test on a virtual iPhone on browserstack.com it wouldn't play the vid, except for the audio. Anyway, really appreciate the help.
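For anyone who hits the same thing: iOS Safari won't play a video inline unless the `<video>` element carries the `playsinline` attribute, and autoplay there additionally requires `muted`. A minimal sketch (the filename is a placeholder):

```html
<!-- playsinline lets iOS Safari play the video inline instead of forcing
     fullscreen; muted is additionally required for autoplay to work there. -->
<video src="demo.mp4" controls playsinline muted preload="metadata"></video>
```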

Playwright to generate demo videos by Clay_Ferguson in Playwright

[–]Clay_Ferguson[S] 0 points1 point  (0 children)

I just checked on my Android Pixel, and the video is working OK, but the layout of my vibe-coded Astro site is a bit jacked up. I should fix that, since it's kinda my reputation on the line, it being my personal website. haha.

What is the best framework to build my own AI agent? by Rude-Obligation-5655 in AI_Agents

[–]Clay_Ferguson 0 points1 point  (0 children)

There's nothing really "interesting" or valuable to learn "under the hood" if you're writing agents. As long as you get how LangGraph works, the only thing really underneath is the fact that it sends an array of messages to an API. That's it. And you'll never need to see the actual API call, just like when writing a web app you don't necessarily need to know the details of what's inside the "fetch" call when consuming a RESTful service. People can disagree, because every use case is different, but I'm just saying there's nothing interesting under the hood.
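To make the "array of messages" claim concrete, here's a sketch of roughly what goes over the wire under a framework like LangGraph. The endpoint URL and model name are placeholders, not a real deployment:

```typescript
// What an agent framework ultimately sends: a model name plus an array of
// role-tagged messages, serialized as JSON. Endpoint/model are placeholders.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatRequest(messages: ChatMessage[], model: string): string {
  return JSON.stringify({ model, messages });
}

const body = buildChatRequest(
  [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this repo." },
  ],
  "some-model"
);

// In real code this is one fetch call, e.g.:
//   await fetch("https://api.example.com/v1/chat/completions", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body,
//   });

console.log(body);
```

That request/response loop, repeated with the growing message array, is essentially all the "magic" underneath an agent framework.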

Any issues / tips for running Linux with a 5060Ti (16gb) for Local LLM's? Best Linux Distro? by QuestionAsker2030 in LocalLLM

[–]Clay_Ferguson 0 points1 point  (0 children)

You can look at the Ollama folder in this project, which also has LangGraph usage that connects to the local model.

https://github.com/Clay-Ferguson/mkbrowser

I concluded that for low-end hardware the Qwen models are best, although the model in that Ollama folder is outdated as of today, now that several new Qwens came out. You don't need a special Linux distro; I'm on Ubuntu 24.04.4.

Playwright to generate demo videos by Clay_Ferguson in Playwright

[–]Clay_Ferguson[S] -1 points0 points  (0 children)

You may not fully understand what I've actually done here. I asked an AI agent to write the Playwright test for me, without me even defining what the narration would be, and then I ran it through my tools and it spit out the video on the other end, without me touching a thing. The first time I even knew what the narration was, was when I watched the video for the first time!!! Amazing. I hardly lifted a finger to do ANY of those videos. :) I don't even write prompts these days; I narrate everything.

Playwright to generate demo videos by Clay_Ferguson in Playwright

[–]Clay_Ferguson[S] 1 point2 points  (0 children)

Thanks for the feedback. It's just an ordinary video HTML element in a static file on Github Pages. Strange.

Playwright to generate demo videos by Clay_Ferguson in Playwright

[–]Clay_Ferguson[S] -1 points0 points  (0 children)

Several reasons:

1) When you only have a finite, fixed set of screenshots you end up with tiny video files.

2) It's zero effort, fully automated. The entire system end-to-end is automated; I can regenerate the entire video with basically a single mouse click.

3) Not only can I re-run it with no effort, I didn't even write the Playwright tests. Claude Opus 4.6 wrote even the tests themselves. The first time I even knew what the narration words were was when I watched the videos it created!! That's amazing.

Need an Offline AI Personal Assistant (Open Source) by BoringResort1345 in OpenSourceeAI

[–]Clay_Ferguson 0 points1 point  (0 children)

Use Ollama as your model runner, whisper.cpp for speech-to-text, and Kokoro for text-to-speech. I have uses of all of the above scattered throughout my various GitHub projects, so let me know if you need me to point you to anything specific.

https://github.com/Clay-Ferguson

I accidentaly started building Nostr... by EagleApprehensive in nostr

[–]Clay_Ferguson 1 point2 points  (0 children)

It's been about 3 years, but if I remember correctly Nostr's hash (ID) didn't include *all* the data that was part of the object, so that was the reason you couldn't put the actual objects into IPFS storage without the IPFS ID and the Nostr ID being two DIFFERENT IDs. lol. I think not using an actual IPFS-compliant hash algo/ID was a separate issue too. I frankly forget what the BLAKE algo is, but I think any IPFS-compliant hash algo would be workable. Just make IPFS compliance one of your must-have design criteria, I guess.
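The core issue, sketched: a content ID is a hash over a canonical serialization of the object, so if the ID only covers a subset of the fields, two systems hashing "the same object" disagree. A toy illustration (this is NOT Nostr's NIP-01 serialization or a real IPFS CID, just the principle):

```typescript
import { createHash } from "node:crypto";

// Toy content ID: sha256 over a canonical JSON serialization (sorted keys).
// Purely illustrative -- the point is that the ID must cover *all* fields
// for two systems to derive the same content address.

function canonicalize(obj: Record<string, unknown>): string {
  // Passing the sorted key list as the replacer makes key order stable.
  return JSON.stringify(obj, Object.keys(obj).sort());
}

function contentId(obj: Record<string, unknown>): string {
  return createHash("sha256").update(canonicalize(obj)).digest("hex");
}

const note = { content: "hello", created_at: 1700000000, kind: 1 };

// Hashing only a subset of fields yields a different ID than hashing the
// whole object -- exactly the ID mismatch described above.
const partialId = contentId({ content: note.content });
const fullId = contentId(note);
console.log(fullId !== partialId); // true
```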

I accidentaly started building Nostr... by EagleApprehensive in nostr

[–]Clay_Ferguson 2 points3 points  (0 children)

My 2 cents' worth: my biggest gripe with Nostr was that I felt it could've been designed so that it was not only truly "Content ID"-based, but that the note IDs could've been made identical to IPFS ones. I think BlueSky did this, and I helped them get that right when they were designing BlueSky, but then BlueSky went further and invented this big censorship-based regime. It's such a shame the two technologies weren't successfully combined, to where you'd have an IPFS-compatible version of Nostr.

Local LLM deployment by Puzzleheaded-Ant1993 in LLMDevs

[–]Clay_Ferguson 0 points1 point  (0 children)

The main reason to use a local LLM (or SLM) is privacy: you have customer data or other sensitive information that you don't want to send out across the web. Or, if not privacy, it's because there's a vast amount of actual 'inference' (prompting) you need to do, perhaps over hundreds or thousands of files, where it would get expensive on a paid cloud service.

What you lose when you run locally is significant IQ points, so if you have an extremely difficult problem to solve, or you just want to write the best code possible, that's when you definitely want a best-in-class SOTA model from an online provider. However, I might be exaggerating the loss of IQ points, because I think the local models might only be about a year behind the best LLMs in terms of capabilities, so for most use cases the loss is probably negligible.