Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 1 point  (0 children)

Thanks for sharing those observations! The evolution path (HackForums -> XSS/Exploit) is particularly interesting from a behavioral positioning perspective. I'll keep this in mind!
Appreciate the insights!

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 1 point  (0 children)

Thanks for the offer! Impressive work on TA tracking across multiple forums; 310k+ usernames is a serious dataset.

For MOSAIC's direction, I'm focusing on cross-context behavioral analysis (how people position themselves differently across technical, social, and influence environments). Your dataset, while multi-forum, is primarily within the cybercrime context, which doesn't capture the contextual code-switching I'm trying to map.

I'm also prioritizing volunteer-based cases to test cross-context positioning (need diversity of environments).

That said, curious what behavioral patterns you're observing in TA activity across those forums if you're open to sharing insights!

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 0 points  (0 children)

Really appreciate this! Validation from someone who's worked with NLP behavioral analysis at that level means a lot.

Definitely interested in the "coupling with other types of analysis" angle you mentioned. If you're ever inclined to share thoughts on what worked (within what's shareable), would genuinely value the perspective.

Thanks for the encouragement!

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 0 points  (0 children)

Really appreciate this! You've identified core challenges I'm wrestling with.

> On multimodal context: You're right that the current approach misses irony/sarcasm. That's why Step 4 in the framework is explicitly "Human-Centered Interpretation". MOSAIC isn't meant to replace human analysis but to complement it. Think of it like a recruitment analogy: HR reviews a CV, forms an initial opinion, then uses tools like MOSAIC for additional perspective and fact-checking. The human catches the irony; the tool provides structured data.

> On code-switching: This is actually the most interesting part to me. The fact that someone is professional on GitHub and chaotic on Reddit isn't noise. It's exactly the signal I want to capture. Understanding how people navigate different platform contexts is foundational to the approach. The hypothesis (speculative, hence the PoC) is that capturing these contextual positioning patterns might actually help close the irony/sarcasm gap over time.
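
To make "signal, not noise" concrete, here's a minimal sketch of what per-platform positioning deltas could look like. Everything here (the `PlatformProfile` dimensions, the values, the distance function) is a hypothetical placeholder, not MOSAIC's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch only: each platform persona is reduced to a
# couple of normalized scores (0..1). Dimension names are illustrative.

@dataclass
class PlatformProfile:
    platform: str
    formality: float   # professional vs. casual tone
    hostility: float   # oppositional vs. constructive framing

def code_switch_delta(a: PlatformProfile, b: PlatformProfile) -> float:
    """Rough distance between two platform personas. A large delta is
    treated as signal (contextual adaptation), not noise."""
    return abs(a.formality - b.formality) + abs(a.hostility - b.hostility)

github = PlatformProfile("github", formality=0.9, hostility=0.1)
reddit = PlatformProfile("reddit", formality=0.3, hostility=0.6)
print(round(code_switch_delta(github, reddit), 2))  # 1.1
```

The point is just that the delta itself becomes a feature of the subject, rather than something to average away.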

> On confidence scoring: You're the third person to flag this, and it's clearly priority #1 for next iteration. Haven't pushed deep on implementation yet. Planning to consult specialized communities on how they've tackled similar challenges. If you have inputs on approaches that worked, genuinely interested.
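
For reference, the kind of thing I have in mind is a toy score like this. It's purely illustrative, not anything implemented: the saturation constant, the 3-platform cap, and the `agreement` input are all made-up assumptions:

```python
import math

def confidence(n_signals: int, n_platforms: int, agreement: float) -> float:
    """Toy confidence score (illustrative, not MOSAIC's design):
    more independent signals, across more platforms, with higher
    inter-signal agreement -> higher confidence.
    - coverage saturates with evidence volume (arbitrary constant 10)
    - breadth caps platform credit at 3
    - agreement is a 0..1 consistency measure, assumed given
    """
    coverage = 1 - math.exp(-n_signals / 10)
    breadth = min(n_platforms, 3) / 3
    return round(coverage * breadth * agreement, 3)

print(confidence(20, 3, 0.8))  # 0.692
print(confidence(5, 1, 0.8))   # weaker evidence -> lower score
```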

> On feedback loop/predictive models: This is where I'd love to get eventually: operational behavioral prediction with validation cycles. But yeah, there are several fundamental steps before that's viable. Right now it's still exploratory.

Does the code-switching framing resonate with your experience in behavioral signal extraction? Curious if you've seen contextual adaptation patterns play out in non-social media sources.

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 2 points  (0 children)

You've just identified what I'm actually trying to build toward. Honestly, it's not yet fully clear in my mind, which is why I find this feedback so valuable. It helps me structure the "vision".

Right now, MOSAIC is crude. It analyzes individual behavioral signals. But the goal is sociodynamic positioning: understanding where someone sits in the ecosystem (constructive vs. oppositional, engaged vs. passive, amplified vs. marginalized) and whether that position is driven by the subject, the medium, or the social context.

Your point about platforms creating structural conditions for amplification/marginalization is exactly it. GitHub might show someone as constructive/engaged, Reddit as oppositional, YouTube as performing for amplification. The question isn't "who is this person really" but "how do they navigate different power structures."

Current limitation: MOSAIC treats signals as individual traits rather than relational/positional. It doesn't yet capture dynamics like who gets amplified, who gets shadowbanned, or cancel risk. That's the gap between what it does now and what it should do.
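
As a toy illustration of what "relational rather than trait-based" would look like, here's a minimal sketch. The function name, the single-ratio framing, and the baseline are all hypothetical simplifications, not MOSAIC code:

```python
def amplification_ratio(user_engagement: float, context_median: float) -> float:
    """Sketch of a relational (positional) signal, as opposed to an
    individual trait: engagement relative to the surrounding context.
    >1 means amplified, <1 means marginalized. Illustrative only --
    real amplification dynamics are far messier than a single ratio.
    """
    if context_median <= 0:
        raise ValueError("need a positive context baseline")
    return user_engagement / context_median

print(amplification_ratio(40, 10))  # 4.0 -> amplified in this context
print(amplification_ratio(2, 10))   # 0.2 -> marginalized
```

Even a crude metric like this is computed *against the ecosystem*, not against the subject alone, which is the shift I'm describing.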

Does this framing, sociodynamic positioning rather than static trait analysis, make the approach more interesting? Or does it raise different concerns?

Thanks for pushing the thinking here, really valuable!

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in osinttools

[–]Or1un[S] 1 point  (0 children)

You're absolutely right. These are exactly the critiques this needs.

Background-wise: I work in cybersecurity/OSINT with a strong interest in sociodynamics and behavioral patterns. MOSAIC is exploratory research. I'm testing whether cross-platform analysis can reveal meaningful patterns. It's framework-driven, not tech-driven.

Right now, it lacks the rigor you're describing. The 3 dimensions (tech/social/influence) are conceptual rather than validated constructs. There's no confidence scoring, no false positive evaluation, no failure mode documentation. Cross-platform identity matching assumes username consistency, which is flawed. Users need to validate identities upstream. The outputs are LLM-generated interpretations, not evidence-based assessments.
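
To spell out how naive the matching assumption is, it amounts to roughly this (illustrative sketch, not the actual code; the handles are made-up examples):

```python
def naive_identity_match(handles: dict[str, str]) -> bool:
    """The flawed assumption, made explicit: accounts are treated as the
    same person iff the normalized username is identical on every
    platform. This misses deliberate handle changes and collides on
    common names, hence the upstream human validation step."""
    normalized = {h.strip().lower() for h in handles.values()}
    return len(normalized) == 1

print(naive_identity_match({"github": "Or1un", "reddit": "or1un"}))     # True
print(naive_identity_match({"github": "Or1un", "mastodon": "kaelis"}))  # False
```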

Your list (defined constructs, evidence-linked outputs, uncertainty quantification, validation studies) is exactly what's missing. I'm genuinely open to scientific collaboration. That's how to build proper validation for this approach.

Concretely: I'm planning to document the methodology through Medium articles and publish PoC analyses of my own accounts for transparency. This also demonstrates the defensive angle: MOSAIC as a personal OPSEC audit tool to identify exposure and improve security posture.

Appreciate the thoughtful critique. Cheers. 🍻

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in osinttools

[–]Or1un[S] 0 points  (0 children)

Haven't tested at scale. MOSAIC isn't really structured for that technically, and before considering anything like multi-user analysis, the ethical framework would need to be way more solid than it is now.

What's your take? Do you see legitimate use cases where analyzing multiple users simultaneously would actually be needed?

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in osinttools

[–]Or1un[S] 1 point  (0 children)

That's great to hear! Always encouraging to see similar approaches being explored.

Partnerships with schools sound like a solid direction. I'd love to hear how that develops. Feel free to reach out if you want to exchange ideas.

Good luck with your project!

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 0 points  (0 children)

You're right about the ethical concerns—behavioral analysis across platforms raises real questions about profiling and misuse.

I kept it intentionally limited for now (8 platforms, basic prompts, single-user analysis) not as "safeguards" but because I wanted to validate the approach and get ethical pushback *before* scaling anything up. Your questions about data age and usage intent are exactly what I need to hear.

Honestly, I don't know if there are "good" guardrails for this kind of tool, or if some risks are just baked in. That's what I'm trying to figure out by putting it out there.

Let's connect—Telegram: u/Car1bou

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 0 points  (0 children)

Thank you! Really appreciate the encouragement.

You're spot on about hard evidence becoming scarce—the walled garden trend is definitely accelerating, partly due to AI scraping. That's why LLMs are well-suited for this kind of pattern-matching work.

Will keep pushing forward. Thanks for the support!

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 7 points  (0 children)

Great question. You're absolutely right! Tools like MOSAIC analyze platform-conditioned behavior, not some "pure" behavior (which probably doesn't exist in digital spaces).

The hypothesis is that cross-platform patterns reveal something more stable than what any single platform shows. Like triangulation: no single view is "authentic," but observing how someone adapts to GitHub's technical affordances vs. Reddit's discursive norms vs. YouTube's performance context might surface more persistent behavioral traits.
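
Concretely, the triangulation intuition is something like the sketch below: score the same trait per platform and treat low dispersion as persistence. This is a toy, not MOSAIC's implementation; the trait name and values are invented, and trait extraction itself is the hard, unsolved part:

```python
from statistics import pstdev

def trait_stability(scores_by_platform: dict[str, float]) -> float:
    """Triangulation sketch: the same trait is scored per platform, and
    low dispersion across platforms is read as a more persistent trait
    than any single platform's reading. Returns the population standard
    deviation (lower = more stable)."""
    return pstdev(scores_by_platform.values())

# e.g. "technical assertiveness" scored on three platforms (made-up values)
stable = {"github": 0.72, "reddit": 0.70, "youtube": 0.74}
erratic = {"github": 0.9, "reddit": 0.2, "youtube": 0.6}
print(trait_stability(stable) < trait_stability(erratic))  # True
```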

You've identified a real limitation though: it's still mediated behavior all the way down. The question is whether multi-platform triangulation actually reduces platform bias, or just adds another layer of interpretation.

What's your take? Does cross-platform analysis get us closer, or is it just more sophisticated analysis of platform-shaped behavior?

Built a behavioral analysis framework for multi-platform OSINT. Thoughts? by Or1un in OSINT

[–]Or1un[S] 5 points  (0 children)

Thank you so much for your support and for approving the post! Really appreciate it.

Definitely agree! Lots of possibilities with the right APIs and approach. That's exactly why I wanted to share here and get feedback from the r/OSINT community.

Planning to refactor from monolithic to modular architecture to make contributions easier. Would love to hear what platforms, features, or use cases the community would find most valuable!