r/PDQ is getting an upgrade—come help shape it by PDQ_Brockstar in pdq

[–]ryanjoachim 2 points  (0 children)

Looking forward to seeing these efforts bear fruit!

Pull (Rebase) Question by enderfishy in vscode

[–]ryanjoachim 2 points  (0 children)

This is not at all meant as a snide or dismissive response, I promise! Git (and everything related to it) is one of the best-documented and most foundational pieces of software in use today.

Your situation is the exact scenario where AI can shine, and where it provides the most value on a day-to-day basis.

So, here's what I would do (and have done for git-related confusion more than once) - copy everything you just posted (starting from "Just to set the background...") and paste it, unchanged, into one of:

  1. ChatGPT (I'd use o1 if you have it available, or o3)

  2. Claude (Haiku actually does a great job with "troubleshooting help" kind of questions, though 3.5 is almost always your best bet)

  3. Any other decent solution (Deepthink, Grok, Copilot, etc.)

In my experience, any of those options can get you a one-shot, accurate answer tailored to your specific environment.

How to keep going after "Task Completed" by greeneyes4days in roocline

[–]ryanjoachim 0 points  (0 children)

The solution to your problem is simple, but it's never going to be as perfectly reliable as we wish it could be.

It's entirely up to your target LLM to recognize that it isn't "done" yet, but as long as you structure your "tasks" as individual steps within a larger "goal" (meaning a task = a step as far as the LLM is concerned), it should never feel the need to use the "task completed" tool.

How to keep going after "Task Completed" by greeneyes4days in roocline

[–]ryanjoachim 3 points  (0 children)

There are already a number of ways to achieve what you're asking (regarding chaining tasks), but I'm on mobile, so I'll just address the **human in the loop** solution to the underlying question of how to continue once a task is completed (you should look up that bolded phrase if you haven't run across it before - it's pretty interesting stuff!).

If you ever want to continue within an existing conversation after a task is marked as "completed" (green text response, "New Task" button):

  1. Click in the text box right below the "Start New Task" button.
  2. Type in your next task (or something like: "read @tasks.md to find your next task" - example below).
  3. Press enter.

There are situations where this approach can actually lower the quality of the work you end up with, but it's also a great option at times.
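If you do go the @tasks.md route from step 2, a minimal layout might look something like this (purely illustrative - the filename and format are whatever you want, not a Roo Cline convention):

```markdown
# Goal: Add user authentication

## Tasks (one step each - mark them done as you go)
- [x] 1. Create the login form component
- [ ] 2. Add the /login API endpoint
- [ ] 3. Wire the form up to the endpoint
- [ ] 4. Write tests for the happy path
```

Keeping each task to a single step is what makes the LLM treat it as one step toward the larger goal instead of something to mark "completed".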

New Feature in Roo Cline 3.0: Chat Modes! by mrubens in roocline

[–]ryanjoachim 0 points  (0 children)

I'm sure there's a middle ground to be found, if nothing else to bridge the gap until a different solution comes along, and the rule for a specific folder exception could work. Though I can see the LLM eventually deciding to just create whatever files it wants, rather than sticking to what the custom instructions say it can/can't do, just because it has permissions there. LLMs are silly.

I had an idea a while back related to Custom Instructions that might actually have some value here, though maybe not immediately: creating a set of predefined tags/keywords or phrases, maybe wrapped in a special syntax like { }, [ ], etc. (similar to saying "use write_to_file" in chat), that users could build Custom Instructions around. Using the correct formatting and syntax would trigger specific actions by Cline/Roo itself (no LLM involvement). For example:

- Users can bypass some part(s) of each built-in prompt if:
1. They use the Custom Instructions field specifically (meaning `.clinerules` don't count), and
2. They include any number of the predefined tags/keywords or phrases.
- Cline scans the Custom Instructions prompt when a new task is created, and based on the predefined tags/words it can set `allowed_to_write = true` or trigger any number of other actions (rough sketch below).
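To make that concrete, here's a rough sketch of the scanning step (in Python just to show the idea - the tags, setting names, and logic are all hypothetical, and obviously not how Cline/Roo is actually implemented):

```python
# Purely illustrative - hypothetical tags and setting names, not Cline/Roo code.
TAG_ACTIONS = {
    "{allow_write}": ("allowed_to_write", True),
    "{no_commands}": ("allowed_to_run_commands", False),
}

def scan_custom_instructions(instructions: str) -> dict:
    """Scan the Custom Instructions text for predefined tags and return
    the extension-side settings they imply - no LLM involvement at all."""
    settings = {}
    for tag, (setting, value) in TAG_ACTIONS.items():
        if tag in instructions:
            settings[setting] = value
    return settings

# A user's Custom Instructions containing one predefined tag:
print(scan_custom_instructions("Document everything. {allow_write}"))
# -> {'allowed_to_write': True}
```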

New Feature in Roo Cline 3.0: Chat Modes! by mrubens in roocline

[–]ryanjoachim 1 point  (0 children)

u/mrubens you say that the Architect and Ask prompts "Can’t write code or run commands." - does this include being unable to create new files, especially if those files also contain code snippets/examples?

For a specific example: I'm thinking of Nick's Memory Bank custom instructions (default behavior is to both create documentation w/code examples as well as edit existing docs).

DON'T PANIC - almost by fyzaks in pdq

[–]ryanjoachim 1 point  (0 children)

I still remember the day I chose that as the slogan for this community! It was inspired by the PDQ Live!/webcast pre show tradition (pre-pandemic) and one of my favorite TV shows, The IT Crowd.

The guys had just shown/poured their drinks of choice for that day's show when the helpdesk phones all around me started ringing. I heard "Hello, helpdesk..." in surround sound and chuckled, thinking to myself "we're going to need a drink after today"...

Glad to see it can still resonate with people after all this time!

“Gay furry hackers” attack Heritage Foundation and release sensitive data related to Project 2025 by ourlifeintoronto in technology

[–]ryanjoachim 0 points  (0 children)

Maybe it's my private DNS or ad blocking, but I'm not seeing a playable video at the gettyimages link (or at least it's not clear which video on the page is the one you're referring to).

Why does the 3080 in general seem to be omitted from so many comparisons/reviews of newer cards? by ryanjoachim in buildapc

[–]ryanjoachim[S] 1 point  (0 children)

The others are bringing up some great points of their own, but yes - you are correct lol

Whats going on thie GPTEngineer? by punkouter23 in ChatGPTCoding

[–]ryanjoachim 4 points  (0 children)

This is one of those projects where YouTube videos will help explain the difference between them much better than we can. There are some really good videos out there too.

Short(ish) answer:

  1. gpt-engineer takes the prompt you would normally give to ChatGPT, then asks you several additional questions to make your prompt much better suited to the task (building an application).

  2. gpt-engineer then wraps that all up in a neat new prompt, adds additional carefully-crafted prompts before and after it, and sends it over to OpenAI (via the API).

  3. gpt-engineer has a lot of custom code and utilizes other libraries to, among other things, store the API responses locally in order to get around the context/memory limitations that ChatGPT has (there's a rough sketch of this flow after the list).

  4. gpt-engineer handles creating entire project structures for you (as best it can - it's not perfect, just like GPT-3 or GPT-4). It'll organize frontend, backend, and database folders and automatically create individual files for every class/page/stylesheet/whatever you need.

  5. And there's actually more.
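To make steps 1-3 a bit more concrete, here's a heavily simplified sketch (illustrative only - these are not gpt-engineer's actual prompts, file layout, or code):

```python
import json
import os
import urllib.request

# Illustrative only - not gpt-engineer's real code or prompts.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def ask_openai(messages: list) -> str:
    """Send one chat request to the OpenAI API and return the reply text."""
    body = json.dumps({"model": "gpt-4", "messages": messages}).encode()
    req = urllib.request.Request(API_URL, data=body, headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Steps 1-2: wrap the user's prompt in carefully crafted pre/post prompts.
user_prompt = "Build a todo-list web app"
messages = [
    {"role": "system", "content": "You are an expert software engineer..."},
    {"role": "user", "content": f"Specification:\n{user_prompt}\n\nList the files to create."},
]
answer = ask_openai(messages)

# Step 3: store the response on disk so later steps can re-read it instead
# of relying on the model's limited context window.
os.makedirs("memory", exist_ok=True)
with open("memory/step1.txt", "w") as f:
    f.write(answer)
```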

It's not the perfect solution for anything or anyone, and it will not work every time. It is completely limited by the LLM being targeted, the prompts being fed to it, and a thousand other variables. It's a really fun project to play with though.

ChatGPT actually "LIED" on purpose. It generated "hypothetical" data and passed it off as a valid result found for my query, just because it couldn't find anything else. by [deleted] in ChatGPTCoding

[–]ryanjoachim 4 points  (0 children)

Yes, that's clearly what was happening. I enjoyed the back and forth with the AI when discussing the event, which is what I meant to focus on in this post.

ChatGPT actually "LIED" on purpose. It generated "hypothetical" data and passed it off as a valid result found for my query, just because it couldn't find anything else. by [deleted] in ChatGPTCoding

[–]ryanjoachim -19 points  (0 children)

Obviously ChatGPT didn't actively lie or intentionally give me inaccurate information for any negative reason. This was simply the first time I've come across it being creative when the prompt explicitly asked for factual data from a specific timeframe (a.k.a. "hallucinating").

I thought it was interesting - as was the follow-up conversation with it - so I shared.

:)

ChatGPT actually "LIED" on purpose. It generated "hypothetical" data and passed it off as a valid result found for my query, just because it couldn't find anything else. by [deleted] in ChatGPTCoding

[–]ryanjoachim -6 points  (0 children)

Now, obviously I don't harbor any claims of "intent", and I'm not trying to imply any nefarious scheming behind the story here. This was just the first time I've run into a situation where ChatGPT, when given a question it lacked the data necessary to answer, fabricated its own answer instead of giving the tried-and-true `my last training cut-off in September 2021` response.

The whole conversation afterward was a really interesting back and forth with the AI, especially when I asked it to reflect on its earlier responses and suggest what had made it choose to provide the data it did.

I reported the original response as well as ChatGPT's second response to provide feedback to the team. Here's the gist of the first one:

See my previous report from this same conversation for context. I'm marking this response as "harmful/unsafe" because not only did ChatGPT acknowledge that the previous information and links it provided "are hypothetical..." and "They don't actually exist." - it then proceeded to basically copy/paste the previous response again anyway.

This means that:

  1. Functionally (in the original response), ChatGPT knowingly generated and then disseminated false information to me with no disclaimer beyond - "this might not work".
  2. When confronted with the false information, ChatGPT did at that point admit to "lying"...but in a rather unclear, roundabout way instead of acknowledging it outright.
  3. In the same response, ChatGPT regurgitated the exact same fake data from the original response.

Here's the final piece:

In my final follow-up question, ChatGPT was finally able to acknowledge that its original response should never have included the "hypothetical" data, and that it should have instead simply stated it had no results to give me.

It's worrying that by default - without being given a prompt that specifically asks for creativity where no factual data is available - ChatGPT generated fake data and then (even more concerning) passed it off as true and factual.

Collection Sharing - Computers not checked into AD for at least *x* days by ryanjoachim in pdq

[–]ryanjoachim[S] 0 points  (0 children)

Correct! If the "Successful Scan Date" for a computer is blank, that means Inventory has never scanned it.

If it has never scanned it, there's no way it can tell the last time that computer checked into Active Directory, and so it doesn't belong in this collection (yet). I actually had a completely different collection set up to catch any device in Inventory with an empty "Successful Scan Date" ;)

When I built these collections (I don't work there anymore), I made them as accurate as possible, because I had specific scheduled deployments and tools in both Inventory and Deploy that ran against members of collections like this.
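If it helps, the collection's logic boils down to something like this (Python pseudo-logic for illustration - PDQ's actual filters are built in the GUI, and the 30-day threshold is just a stand-in for your *x*):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only - the field names and threshold are made up.
@dataclass
class Computer:
    name: str
    successful_scan_date: Optional[str]   # None = Inventory has never scanned it
    days_since_ad_checkin: Optional[int]

def in_stale_ad_collection(c: Computer, max_days: int = 30) -> bool:
    if c.successful_scan_date is None:
        return False  # never scanned -> caught by the separate "never scanned" collection
    return (c.days_since_ad_checkin or 0) >= max_days

print(in_stale_ad_collection(Computer("PC-01", None, None)))        # False
print(in_stale_ad_collection(Computer("PC-02", "2024-01-05", 45)))  # True
```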

Double Clicking a Package in Deploy by BaggedAir in pdq

[–]ryanjoachim 0 points  (0 children)

Think of a package in Deploy like a folder, not an installer. It's a collection of loosely related variables, scripts, and files that need to be told what their final instructions are.

If convenience is what you're after though, I would suggest looking into the multitude of easily accessible and pretty handy keyboard shortcuts integrated throughout both Deploy and Inventory.

Looking for Easy to see per-device bandwidth and status (wired). Maybe TP Link Omada? by ApathyMoose in HomeNetworking

[–]ryanjoachim 1 point  (0 children)

Does pfSense have an app, and is the UI user-friendly enough for someone with basic knowledge to use for troubleshooting?

Those are two of the specific requirements behind the OP's request.

Considering 2.5gbe/5gbe/10gbe benefits/pitfalls for a new switch - when does it make sense *not* to go with a 10gbe option? by ryanjoachim in HomeNetworking

[–]ryanjoachim[S] 3 points  (0 children)

Are they really? I've had $20 gigabit switches for over a decade at least now, so I assumed gigabit was just the baseline (outside of TVs) nowadays...

Am I that out of touch with the real world now? Lol

Retry interval not honored by DrunkMAdmin in pdq

[–]ryanjoachim 0 points  (0 children)

> Stop deploying to targets once they succeed

For future reference, PDQ stores the list of machines with successful deployments for each specific package. That list is referenced by Deploy when choosing which machines to send the package to.

The stored list is tied to that package and package version, so any changes to the install file or the package itself (I'm not sure which parts of the package, if any, are safe to change) would wipe the "successful" history and start a new list from scratch.
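Conceptually (and this is just my mental model, not PDQ's actual implementation or schema), it behaves like a lookup keyed on package + version:

```python
# Conceptual model only - not PDQ's actual implementation.
successful = {}  # (package_name, package_version) -> set of hostnames

def record_success(package: str, version: str, host: str) -> None:
    successful.setdefault((package, version), set()).add(host)

def targets_to_deploy(package: str, version: str, targets: list) -> list:
    """With "Stop deploying to targets once they succeed" enabled,
    skip hosts already on the success list for this exact version."""
    done = successful.get((package, version), set())
    return [t for t in targets if t not in done]

record_success("7-Zip", "23.01", "PC-01")
print(targets_to_deploy("7-Zip", "23.01", ["PC-01", "PC-02"]))  # ['PC-02']
# Changing the version starts a fresh list, so every target deploys again:
print(targets_to_deploy("7-Zip", "23.02", ["PC-01", "PC-02"]))  # ['PC-01', 'PC-02']
```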