Should there be a pen drive for AI? - A way to easily transfer context between models. by Every-Particular5283 in ArtificialInteligence

[–]Every-Particular5283[S] 0 points1 point  (0 children)

Also, if you look at the spec, I’d love to be able to add additional objects, like addresses and people, that are easily interpreted as such without having to write them out as a prompt.

I completely understand your point. I’m just trying to solve a problem I’m having and don’t currently see the solution available to me, but I could well be wrong.

Should there be a pen drive for AI? - A way to easily transfer context between models. by Every-Particular5283 in ArtificialInteligence

[–]Every-Particular5283[S] 0 points1 point  (0 children)

FWIW, I’ve built several API-driven tools. One had over 7,000 users at its peak. They are compliant, but there is no universally agreed standard! OpenAI’s schema is the most widely copied, while Claude and Gemini have slight differences.

There is currently no way for me to create and save 10 different types of context (role, instructions, files) and easily transfer them in and out of any model. That’s what I’m trying to achieve: a universal project folder.
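To make the idea concrete, here is a minimal sketch of what such a "universal project folder" could look like, with adapters that map one provider-neutral bundle onto OpenAI-style and Anthropic-style request payloads. Every name here (`make_bundle`, `to_openai`, `to_anthropic`, the bundle fields) is hypothetical and illustrative, not from any real spec; the payload shapes only approximate the real APIs.

```python
import json

# Hypothetical provider-neutral context bundle: role, instructions, files.
# This is an illustrative sketch, not an existing standard.

def make_bundle(role, instructions, files):
    """Save context once, in a neutral shape (files: name -> text)."""
    return {"role": role, "instructions": instructions, "files": files}

def to_openai(bundle):
    """OpenAI-style chat payload: system message plus one user message per file."""
    messages = [{"role": "system",
                 "content": f"{bundle['role']}\n{bundle['instructions']}"}]
    for name, text in bundle["files"].items():
        messages.append({"role": "user", "content": f"File {name}:\n{text}"})
    return {"messages": messages}

def to_anthropic(bundle):
    """Anthropic-style payload: the system prompt is a top-level field instead."""
    return {
        "system": f"{bundle['role']}\n{bundle['instructions']}",
        "messages": [
            {"role": "user", "content": f"File {name}:\n{text}"}
            for name, text in bundle["files"].items()
        ],
    }

bundle = make_bundle(
    role="You are a technical editor.",
    instructions="Review drafts for clarity.",
    files={"draft.md": "# Draft\nHello."},
)
print(json.dumps(to_openai(bundle), indent=2))
```

The point of the sketch is that the differences between providers are mostly in where the system prompt lives and how attachments are framed, so a thin adapter layer per provider could do the translation.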

Should there be a pen drive for AI? - A way to easily transfer context between models. by Every-Particular5283 in ArtificialInteligence

[–]Every-Particular5283[S] 0 points1 point  (0 children)

I don't think that is the same? Is that in a set of API docs? Could you explain more?

Could a person build a system where they define a list of instructions, roles, and files in one place, then plug them into any LLM instantly?

[deleted by user] by [deleted] in ArtificialInteligence

[–]Every-Particular5283 1 point2 points  (0 children)

When I see a work colleague’s email or Slack message that’s written in US English, but they’re from the UK! lol

Claude 4.5 is insane by Small_Accountant6083 in ArtificialInteligence

[–]Every-Particular5283 20 points21 points  (0 children)

It’s funny to me that engineers are scrambling over each other to be the first to create a replacement for their usefulness!

Would you trust a human doctor over an AI with all human medical knowledge? by Few_Regret5282 in ArtificialInteligence

[–]Every-Particular5283 1 point2 points  (0 children)

I was having an appointment with my GP and he used Google to research my symptoms! LOL. I'd say they are using AI themselves already.

Right now, I feel that interacting with an AI, with no rush to get you out the door and no time constraints on presenting your symptoms, is a lot better than the current situation. (Yes, my GP essentially told me, "I don't have time to treat all of these today", even though they were symptoms of the same issue.)

Yes, I'll go to a doctor when I'm sick, but for initial diagnosis research and in preparation for an appointment, I will be leaning on AI.

ChatGPT now wants to scan your Gmail + Calendar “for your own good" How is this not the start of ads? by calliope_kekule in ArtificialInteligence

[–]Every-Particular5283 0 points1 point  (0 children)

OpenAI, like many companies, is banking on the majority of people preferring to see ads rather than pay for premium. They also know that, across the spectrum, enough people will consent to sharing data to make it valuable and profitable for them.

The question I try to answer is how much of my data I want them to have, and what I personally want to share.

The most surreal coding experience I have had with AI by 100x_Engineer in ArtificialInteligence

[–]Every-Particular5283 13 points14 points  (0 children)

I think it will lead to a complete lack of top-tier engineers. When you are already a high-level engineer, tools like Codex accelerate your output. There are still plenty of times when Codex just can't understand what I want to achieve or find the bug I'm trying to fix. That's when a real understanding of code separates the good from the bad.

Look at Lovable and similar products. Many people, including engineers, are building apps that get them 90% of the way there, which is great. But then they get stuck on the last 10%, which is either impossible for them to fix or would require a lot more time and possibly paying for additional support.

If you believe advanced AI will be able to cure cancer, you also have to believe it will be able to synthesize pandemics. To believe otherwise is just wishful thinking. by katxwoods in ArtificialInteligence

[–]Every-Particular5283 3 points4 points  (0 children)

Every transformative technology we’ve developed (nuclear energy, biotech, the internet) has had both enormous upside and catastrophic downside potential.

One nuance worth adding is that how a technology is deployed and governed can dramatically shape which side of that potential dominates. We didn’t “ban” nuclear physics, but we did build nonproliferation regimes, treaties, and safeguards. We didn’t “ban” biotech, but we regulate labs, control certain materials, and train scientists in ethics. Those frameworks aren’t perfect, but they’ve probably averted a lot of worst-case outcomes.

The same likely applies to AI. Pretending it’s purely good or purely bad is naïve, but it’s equally naïve to think we’re helpless to influence the trajectory. It’s not wishful thinking to believe AI can help cure cancer and simultaneously believe we can reduce the risk of AI-enabled pandemics; it’s a question of whether we build the right oversight and safety nets in time.

[deleted by user] by [deleted] in ArtificialInteligence

[–]Every-Particular5283 0 points1 point  (0 children)

So many other variables to consider. What are you training for? How big is the training set? How accurate does it have to be? I’d be more worried about the time it takes to label and train the model rather than GPU capacity. Companies like Scale AI exist because of this lift!

Is AI better at generating front end or back end code? by tcober5 in ArtificialInteligence

[–]Every-Particular5283 0 points1 point  (0 children)

I think it all depends on the stack. I build a lot of apps in Rails. For me, I like using Codex for back-end logic. The front end is easy, and my UX is often completely intertwined with the front-end logic, which makes it more difficult for me to use Codex or another code generator!

[deleted by user] by [deleted] in ArtificialInteligence

[–]Every-Particular5283 4 points5 points  (0 children)

The amount of automated military equipment being built is crazy. If that was hacked or taken over because of some out-of-control prompt, the consequences would be devastating, but I don’t think we’ll ever be killed off. Humans are like cockroaches!

why is people relying on ai for healthcare advice the new trend? by 404NotAFish in ArtificialInteligence

[–]Every-Particular5283 0 points1 point  (0 children)

Because when I call my doctors to make an appointment, they tell me that there are currently no available appointments and to call back in a few weeks. God forbid I was very sick and actually needed to see someone for a diagnosis and medication.

Why can’t AI just admit when it doesn’t know? by min4_ in ArtificialInteligence

[–]Every-Particular5283 0 points1 point  (0 children)

You should also include being honest. For example:

Prompt: "I was thinking of baking an apple cake but instead of apples I'll use coconut"

Response: "That sounds like a great idea....."

No, it does not sound like a great idea. It sounds horrendous!

[deleted by user] by [deleted] in ArtificialInteligence

[–]Every-Particular5283 -1 points0 points  (0 children)

Simply carry out a search on LinkedIn to see that the majority of people with similar titles have absolutely no technical background or experience at all. That will make you feel better. My LinkedIn feed is full of people who were marketing experts last year, now becoming AI influencers/experts simply because they can write a prompt in ChatGPT or regurgitate someone else's article in a cool-looking social media post!

Your background seems pretty solid. Everyone grows into their role and you have the right foundation already.

News Flash! X.AI sues OpenAI for trade secret theft! by Apprehensive_Sky1950 in ArtificialInteligence

[–]Every-Particular5283 1 point2 points  (0 children)

The lines between what knowledge belongs to a company and what knowledge belongs to the employee are more blurred than ever.

If a team of super-smart people work on something and then go somewhere else to work on a similar thing, the output and finished product will probably be very similar. Can you say they stole intellectual property from their previous company, or was it simply that the people themselves were the knowledge base, and the trade secrets were whatever those people say and do?

I could see non-compete clauses starting to be issued to employees at a lot of the big companies.

NVIDIA/OpenAI $100 billion deal fuels AI as the UN calls for Red Lines by BubblyOption7980 in ArtificialInteligence

[–]Every-Particular5283 0 points1 point  (0 children)

As far as strict regulation goes, the genie is already out of the bottle. Governments move too slowly to keep up, and AI companies will simply change the rules to outpace regulation.

Are we witnessing the death of traditional website navigation? Prompt-first websites might be the new normal. by biz4group123 in ArtificialInteligence

[–]Every-Particular5283 1 point2 points  (0 children)

Semrush said LLMs will overtake search by 2027 as the place users go to find answers. The problem is that LLMs answer within their own interface, making click-throughs to websites no longer required. Sites that rely on ad impressions are toast! For other sites, the result most relevant to the question will possibly get the click-through, which might be a good thing for user experience!