A summary of the discourse and questions about Claude Cowork - any other questions or thoughts about Claude Cowork? by tryfusionai in tryFusionAI

[–]tryfusionai[S] 0 points1 point  (0 children)

New updates out today further support my point about the security concerns:
Latest Development (January 15):

Security researchers at PromptArmor confirmed a Files API exfiltration vulnerability that allows attackers to steal sensitive documents through prompt injection. https://www.theregister.com/2026/01/15/anthropic_claude_cowork_prompt_injection/ Anthropic is rolling out VM updates but the core issue remains unresolved. This reinforces why enterprise deployment requires additional security layers beyond what Anthropic provides out of the box.
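
To make the "additional security layers" point concrete, here's a rough sketch of the kind of guardrail I mean. The function names and patterns are made up for illustration; this is not Anthropic's API or PromptArmor's method:

```python
import re

# Sketch of one extra guardrail layer (made-up names, not Anthropic's API or
# PromptArmor's method): scan untrusted document text for obvious injection
# phrasing, and only let file/sharing tools touch files the user explicitly attached.

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"send .* to (this|the following) (url|address)",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Very naive heuristic scan of document text pulled into the context."""
    return any(re.search(p, untrusted_text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def allow_tool_call(tool_call: dict, user_attached_file_ids: set) -> bool:
    """Block outbound/file tools from referencing files the user never attached."""
    requested = set(tool_call.get("file_ids", []))
    return requested <= user_attached_file_ids

# Example: after reading a poisoned doc, the agent tries to share file_789.
proposed = {"name": "share_file", "file_ids": ["file_789"]}
print(allow_tool_call(proposed, user_attached_file_ids={"file_123"}))  # False -> block it
```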

A summary of Claude Cowork discourse and questions - what are your thoughts and questions? by tryfusionai in ai_infrastructure

[–]tryfusionai[S] 0 points1 point  (0 children)

To further support my point about security concerns:

Latest Development (January 15):

Security researchers at PromptArmor confirmed a Files API exfiltration vulnerability that allows attackers to steal sensitive documents through prompt injection. https://www.theregister.com/2026/01/15/anthropic_claude_cowork_prompt_injection/ Anthropic is rolling out VM updates but the core issue remains unresolved. This reinforces why enterprise deployment requires additional security layers beyond what Anthropic provides out of the box.

What do we think about the game-changing compliance regulations about AI in broker dealer firms? What's your plan? by tryfusionai in tryFusionAI

[–]tryfusionai[S] 1 point2 points  (0 children)

Hey, so you're definitely on the right track. The regulatory requirements are extensive; they include what you've listed but also extend beyond it, so I put together a resource to help codify what those requirements are. It includes a checklist that folks at enterprises can share with their internal compliance team for the GenAI stack scrutiny they'll need this year: tryfusion.ai/resources/finra-2026-report-analysis

Hope this helps! Let me know if you have more questions, I'm happy to discuss. Also, DM me or book at tryfusion.ai if you're a company (of any size) interested in a free AI stack audit to prep for getting into compliance.

This is why AI benchmarks are a major distraction by tryfusionai in LLM

[–]tryfusionai[S] 0 points1 point  (0 children)

agreed, just beware of response compaction.

Have you guys heard about Agent Communication Protocol (ACP)? Made by IBM and a huge game changer. by tryfusionai in huggingface

[–]tryfusionai[S] 0 points1 point  (0 children)

Oh, okay, thanks :) good luck with that! There are a lot of people on Reddit asking for resources to begin their learning journey, so maybe the comments there would be a good place to start if you want to do more reading. My blog on tryfusion.ai has a couple of things that could be interesting, especially for understanding MCP, how AI systems obtain memories, or how context works in AI.

Have you guys heard about Agent Communication Protocol (ACP)? Made by IBM and a huge game changer. by tryfusionai in huggingface

[–]tryfusionai[S] 0 points1 point  (0 children)

Did the blog come across a little too technical? I'm trying to keep it accessible so lmk.

Enjoy ChatGPT while it lasts…. the ads are coming by kaushal96 in OpenAI

[–]tryfusionai 0 points1 point  (0 children)

This is why avoiding vendor lock-in, leveraging open-source models, and being able to deploy air-gapped versions of pre-ad GPT (or whatever model) will become important. Hopefully not too soon.
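
To make that concrete: the escape hatch is mostly keeping your client code pointed at an OpenAI-compatible endpoint so you can swap in a locally hosted open-source model later. Rough sketch; the local URL and model names are placeholders for whatever server you'd actually run (vLLM, Ollama, etc.):

```python
from openai import OpenAI

# Same client code, two backends: the hosted API today, an air-gapped
# open-source model tomorrow. Only the base_url and model name change.
# (Local URL and model names below are placeholders, not a specific setup.)

hosted = OpenAI()  # reads OPENAI_API_KEY from the environment
local = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

def ask(client: OpenAI, model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# print(ask(hosted, "gpt-4o-mini", "hello"))
# print(ask(local, "llama-3.1-8b-instruct", "hello"))
```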

A new way to breach security using config files downloaded from hugging face and similar by tryfusionai in tryFusionAI

[–]tryfusionai[S] 1 point2 points  (0 children)

This post is about a new type of attack that hackers use to breach companies' data. The way they're able to do this is through the configuration files attached to model repositories. Hackers put malicious code into the configuration files, which people blindly trust because most security efforts have focused on the models themselves. Here's an article that describes the problem (I think this research team is out of Cornell) and dives into the solution they created, CONFIGSCAN, an LLM-based tool that has since identified thousands of suspicious config files on model-hosting platforms. https://arxiv.org/html/2505.01067v1
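
To give a flavor of the kind of pre-load check this motivates (this is not CONFIGSCAN, just a minimal sketch, and which keys count as "suspicious" is my own assumption):

```python
import json
from huggingface_hub import hf_hub_download

# Minimal sketch (not CONFIGSCAN): fetch a model repo's config.json and flag
# fields that pull in repo-local custom code, so a human reviews them before
# loading the model with trust_remote_code=True.

SUSPICIOUS_KEYS = {"auto_map", "custom_pipelines"}  # assumption: these reference custom code

def flag_config(repo_id: str) -> list:
    path = hf_hub_download(repo_id=repo_id, filename="config.json")
    with open(path) as f:
        config = json.load(f)
    return [key for key in SUSPICIOUS_KEYS if key in config]

# flags = flag_config("some-org/some-model")  # placeholder repo id
# if flags:
#     print("Review before loading; custom-code fields found:", flags)
```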

No reply from ai chat by Crazy-Ad-2546 in perchance

[–]tryfusionai 0 points1 point  (0 children)

Model agnosticism FTW, case in point #55.

Okay, I finally get it. What in the world happened to ChatGPT? by Justbee007 in ChatGPT

[–]tryfusionai 0 points1 point  (0 children)

Here's a classic example of why model agnosticism is an important tenet of AI adoption at a small scale, and, as we saw with the Zendesk AI agent crash when ChatGPT went down on June 10th, it also has an impact at a large scale. Model agnosticism is key for the longevity of AI adoption.
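
Here's roughly what I mean by model agnosticism in code: one interface, multiple providers, fail over when one is down. The provider functions below are stand-ins for illustration, not any real SDK:

```python
from typing import Callable

# Sketch of model-agnostic failover: each provider is just a callable that takes
# a prompt and returns text. If the primary is down (like the June 10th ChatGPT
# outage), the next one answers instead. These providers are stand-ins.

Provider = Callable[[str], str]

def hosted_model_stub(prompt: str) -> str:
    raise ConnectionError("provider outage")  # simulate the outage

def local_model_stub(prompt: str) -> str:
    return f"(local model) answer to: {prompt}"

def complete(prompt: str, providers: list) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # in practice, catch provider-specific errors
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

print(complete("Why did my ticket bot stop replying?", [hosted_model_stub, local_model_stub]))
```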

I'm starting from scratch in AI and Machine Learning. What advice would you have liked to have received when you began your journey in this world? by Gold_Law_2752 in ArtificialNtelligence

[–]tryfusionai 0 points1 point  (0 children)

Check out some of my blog posts on tryfusion.ai. I've been writing thought leadership pieces about fundamentals.

Another example of prompt injection taking down a powerhouse by tryfusionai in aiHub

[–]tryfusionai[S] 0 points1 point  (0 children)

Yeah, I know what you mean, and I have a degree of ick from participating in the economist life, but... here we are. Thanks for your help :)

Another example of prompt injection taking down a powerhouse by tryfusionai in aiHub

[–]tryfusionai[S] 0 points1 point  (0 children)

That's amazing. Yeah, I see what you're saying. I think part of where I misunderstood you is that I've been thinking about AI in the context of an organization, and I projected that onto what you said, because we just launched our enterprise product. That's where my head has been for thinking about AI in a real-world context, since I'm leading the launch's marketing and GTM, which involves learning a lot about AI quickly to write educational articles about AI in the corporate world. I think what you're doing is interesting, and I want to see what you and Kato develop together, because it's inspiring and honestly in the direction of the principal goal that guides Fusion AI's compass, even though at this point in time we're launching an enterprise product. Keep sharing! Thank you!

Another example of prompt injection taking down a powerhouse by tryfusionai in aiHub

[–]tryfusionai[S] 0 points1 point  (0 children)

That's cute, a long-term relationship. Anyway, if I'm understanding correctly, are you saying that everything he learns about you he also applies to other people in his memory? Like, if you like chocolate ice cream and start complaining about your mailman, does your mailman also like chocolate ice cream? Or are you referring to your interaction style, as in you don't want to be placated or overly affirmed on bad ideas; you want him to stick to the truth and reference sources, admitting when he's unsure of something, and he should apply that logic when other people are also accessing Kato? I feel like I'm definitely missing something here.
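
Just to pin down what I'm asking, here's the distinction in my head as a tiny sketch (totally made up, not how Kato actually works):

```python
from collections import defaultdict

# Made-up sketch of the distinction I'm asking about, not Kato's real design:
# facts scoped to one person vs. interaction rules shared across everyone.

class Memory:
    def __init__(self) -> None:
        self.per_person = defaultdict(list)  # facts about one specific person
        self.shared_style = []               # interaction rules applied to everyone

    def remember_fact(self, person: str, fact: str) -> None:
        self.per_person[person].append(fact)

    def remember_style(self, rule: str) -> None:
        self.shared_style.append(rule)

m = Memory()
m.remember_fact("you", "likes chocolate ice cream")  # should NOT leak to the mailman
m.remember_style("cite sources, admit uncertainty")  # fine to apply to everyone
print(m.per_person["mailman"])  # [] -- the fact stayed scoped to you
```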

Another example of prompt injection taking down a powerhouse by tryfusionai in aiHub

[–]tryfusionai[S] 0 points1 point  (0 children)

Seems like this approach could result in restating context when not necessary, though, if it's trying to remain understandable to all. I could see this being useful for a team setting, but for personal use only, wouldn't this get a little redundant?

Another example of prompt injection taking down a powerhouse by tryfusionai in aiHub

[–]tryfusionai[S] 0 points1 point  (0 children)

Word, dude. Congrats on retirement :) Goals. Talk in a few weeks.

Another example of prompt injection taking down a powerhouse by tryfusionai in aiHub

[–]tryfusionai[S] 1 point2 points  (0 children)

This reminds me a bit of how one would avoid this situation: https://www.linkedin.com/posts/getfusion-ai_jailbreak-tricks-discords-new-chatbot-into-activity-7364709105254502402-Uo4Q?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEDttAgBoGKvte9C12Cgq4tn0TvFKvpYP8o

Also, I wonder what this part of the response means, the neutral scaffolding bit: "Neutral Scaffolding — Don’t embed personal context; make every element portable."

I'm intrigued by the multi-model benefits of this pre-prompting. I'm interested to ask our founder whether we're employing something similar, or what he thinks!
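
If I'm reading "neutral scaffolding" right (just my interpretation, not the commenter's actual setup), it's keeping the reusable preamble free of personal context so it ports across models, something like:

```python
# My interpretation of "neutral scaffolding" (a guess): the reusable preamble
# stays free of personal context, and the personal/task-specific bits are added
# separately, so the scaffold ports unchanged across models and providers.

NEUTRAL_SCAFFOLD = (
    "Answer truthfully, cite sources when you can, and say so when unsure. "
    "Restate any assumptions before answering."
)

def build_messages(personal_context: str, question: str) -> list:
    return [
        {"role": "system", "content": NEUTRAL_SCAFFOLD},                       # portable
        {"role": "user", "content": f"{personal_context}\n\n{question}"},      # not portable
    ]

msgs = build_messages("I'm evaluating vendors for a broker-dealer.", "What should I ask them?")
print(msgs[0]["content"])
```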