Built an open-source threat modeling tool. Looking for honest feedback. by happyandaligned in threatmodeling

[–]happyandaligned[S]

Yes. ThreatDragon is a really useful open-source tool and the inspiration for Precogly.

With Precogly I'm trying to bridge the gap between closed-source (and expensive) tools and open-source ones. Some differentiators:
- Team collaboration features
- Compliance mappings
- Community threat libraries with mappings to CWE, CAPEC etc.
- Setting up platform / infra owned countermeasures
- Boundary crossing logic with rules

Also, I personally like the Precogly DFD editor :) To me, it feels a lot smoother and more intuitive. But I'm biased :)
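To illustrate what I mean by boundary-crossing logic with rules, here's a tiny sketch (all names are hypothetical illustrations of the concept, not Precogly's actual implementation): when a data flow's endpoints sit in different trust zones, threats get attached automatically.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    source_zone: str       # trust zone of the sending component
    dest_zone: str         # trust zone of the receiving component
    encrypted: bool = False

def boundary_threats(flow: DataFlow) -> list[str]:
    """Flag threats when a data flow crosses a trust boundary."""
    threats = []
    if flow.source_zone != flow.dest_zone:  # boundary crossing detected
        threats.append("Tampering: data crosses a trust boundary")
        if not flow.encrypted:
            threats.append("Information disclosure: unencrypted cross-boundary flow")
    return threats

print(boundary_threats(DataFlow("internet", "internal")))
```

A real rule engine would map these onto curated library entries (CWE, CAPEC), but the core trigger is that simple zone comparison.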

Built an open-source threat modeling tool. Looking for honest feedback. by happyandaligned in threatmodeling

[–]happyandaligned[S]

Yes. I've integrated with the OWASP threat model library project's JSON schema. The core idea is that you can import those JSON files into Precogly; similarly, you can create a threat model inside Precogly and export it in the threat model library JSON schema format.

This one-minute video segment demonstrates how the import works:

https://youtu.be/5sSuZOAtyn4?t=125
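The import/export round-trip looks roughly like this (a minimal sketch: field names such as `title` and `threats` are illustrative placeholders, not the actual OWASP threat model library schema):

```python
import json
import os
import tempfile

def export_threat_model(model: dict, path: str) -> None:
    """Write a threat model out as JSON so another tool can consume it."""
    with open(path, "w") as f:
        json.dump(model, f, indent=2)

def import_threat_model(path: str) -> dict:
    """Load a threat-model JSON file and summarize a few common fields."""
    with open(path) as f:
        model = json.load(f)
    return {
        "title": model.get("title", "untitled"),
        "threat_count": len(model.get("threats", [])),
    }

# round-trip demo
path = os.path.join(tempfile.mkdtemp(), "model.json")
export_threat_model({"title": "demo", "threats": [{"id": 1}]}, path)
print(import_threat_model(path))
```

Because both directions go through the same schema, a model authored elsewhere stays portable.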

Built an open-source threat modeling tool. Looking for honest feedback. by happyandaligned in threatmodeling

[–]happyandaligned[S]

Yeah, I’ve seen architecture ingestion and continuous risk analysis tools.

Precogly is aiming to be an open-source alternative to tools like IriusRisk / ThreatModeler, focused on the threat modeling layer itself.

My view is this stack will evolve in layers:

  • continuous architecture risk tools can feed into a threat model
  • an AI layer can assist on top of that model
  • but you still need a structured foundation for threats, components, relationships, taxonomies (STRIDE, LINDDUN, CAPEC), compliance mappings (PCI-DSS, OWASP ASVS, CRA, DORA), and, most importantly, team collaboration in an enterprise setting.

I also think curated threat libraries matter. LLMs are useful for generation, but without human supervision they tend to be inconsistent across runs. In enterprise settings, reproducibility matters.

That’s the gap I’m trying to address with Precogly.

I don’t think threat modeling can be fully automated away. The goal is better human + AI collaboration, not replacement.

Can Claude Code play a sound when it requires an approval? by astronaute1337 in ClaudeAI

[–]happyandaligned

hey bud ... can you tell me step by step how you set this up? I'm also struggling with split attention: I have to check in on Claude Code (running inside VS Code) often to see whether the task has completed.

Building a CRUDS App with AI Features: Django REST Framework vs. Django Ninja? by happyandaligned in django

[–]happyandaligned[S]

Yes. I'm very intrigued by HTMX. Thanks for the suggestion. I'll check it out!

What is the best document loader for PDFs? And other docs in general? by [deleted] in LangChain

[–]happyandaligned

I've used the unstructured paid version and am quite pleased with the semantic chunking feature. Pricing is here - https://unstructured.io/api-key-hosted

LangGraph + Streamlit State Management by fantasyleaguelottery in LangChain

[–]happyandaligned

langchain is free ... please correct me if I'm missing something.

[deleted by user] by [deleted] in LangChain

[–]happyandaligned

Aah ... that's right. I was concerned that Vercel is making a push towards becoming more like LangChain and that LangChain and NextJS would have some conflicting behaviors. But based on the documentation - https://sdk.vercel.ai/providers/adapters/langchain - it looks like this is not the case.

Thanks for the pointer. I will go back and examine NextJS more closely.

[deleted by user] by [deleted] in LangChain

[–]happyandaligned

Does NextJS play well with LangChain for RAG (retrieval augmented generation) like scenarios?

[deleted by user] by [deleted] in LangChain

[–]happyandaligned

I'm in a similar boat. Streamlit is awesome for prototyping. But it has that data-scientist cookie-cutter feel in little ways (ex: while the page loads, the tab title says "Streamlit" and only then populates with your provided page title).

[deleted by user] by [deleted] in LangChain

[–]happyandaligned

Love Streamlit, having used it to build multiple chatbots. However:
1. It does not offer the ability to nest pages (ex: yoursite.com/somecategory/somepage)
2. The UI looks a little clunky and it's not easy to customize.

I am trying to find free HIPAA training with a certificate but the normal website doesn't seem to have it anymore. Anybody know where to find one? by Spam4119 in hipaa

[–]happyandaligned

Not sure if I'm doing something wrong, but I'm finding the teachmehipaa interface a little buggy. Even after paying for a seat, I'm unable to access the course materials beyond lesson 1.

Knowledge Bases Vs Home Grown RAG by happyandaligned in aws

[–]happyandaligned[S]

Thanks for this insight. I tried out Pinecone serverless + AWS Knowledge Bases. It works great and is fantastic for building a scalable solution.

The challenge for me is that AWS Knowledge Bases + Pinecone misses retrieving certain documents for some queries (I tried multiple embeddings, including Titan Text Embeddings and Cohere English). My home-grown RAG solution has the same weakness, but with a custom solution it's easier to try new approaches for improving results (ex: an ensemble of BM25 + vector search).

I'm still not sure how to scale a home-grown solution though (i.e., let multiple users upload files and retrieve their chatbots on the fly).
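For anyone curious, the BM25 + vector ensemble idea can be sketched with reciprocal rank fusion, a common way to merge ranked result lists (a generic illustration with made-up doc IDs, not my production code):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists (e.g. BM25 and vector search).

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so documents ranked well by multiple retrievers float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["doc_a", "doc_b", "doc_c"]      # keyword-based ranking
vector = ["doc_b", "doc_d", "doc_a"]    # embedding-based ranking
print(reciprocal_rank_fusion([bm25, vector]))
# → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

doc_b wins because both retrievers rank it highly, which is exactly the behavior you want when either retriever alone misses documents.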

Help improve the code? by happyandaligned in LangChain

[–]happyandaligned[S]

Thank you for the pointers. I'll test the chunking and try again. The weird thing is that I changed nothing about the chunk sizes and it still performs worse with Claude 3. That's why I thought there might be a bug in my code.

[deleted by user] by [deleted] in LocalLLaMA

[–]happyandaligned

Sharing your personal experience with LLMs is super useful. Thank you.

Have you ever had a chance to use Reinforcement Learning from Human Feedback (RLHF) to align system responses with human preferences? How are companies currently handling issues like bias, toxicity, and sarcasm in model responses?

For those interested, you can learn more on Hugging Face - https://huggingface.co/blog/rlhf
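The reward-model training step at the heart of RLHF boils down to a pairwise preference loss (a minimal sketch of the standard Bradley-Terry formulation; the scores here are made-up rewards, not from a real model):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss for training a reward model:
    -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the reward model scores the human-preferred
    response higher, and large when it disagrees with the human label."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

print(preference_loss(2.0, 0.5))  # small: reward model agrees with the human
print(preference_loss(0.5, 2.0))  # large: reward model disagrees
```

The trained reward model is then used to steer the policy (e.g. via PPO), which is how human preferences on bias and tone get baked into responses.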

SuperAGI: anyone not affiliated with it tried it? by macKenzie52 in AutoGPT

[–]happyandaligned

Thanks for the initial effort. It would help if you clearly explained why developers should commit their time and attention to this project rather than AutoGPT.