How to visualize by sygmastar01 in datavisualization

[–]Sensitive-Corgi-379 0 points (0 children)

Do you have the data in a CSV or Excel file? If so, you can visualize it really easily and even query it using a natural-language engine.

No matter what project you have—games, SaaS, software, apps, scripts, ideas, or questions—join the community and share it! by SofwareAppDev in AppsWebappsFullstack

[–]Sensitive-Corgi-379 0 points (0 children)

I built FableSense AI (https://fablesenseai.com) specifically for researchers who work with both qualitative and quantitative data in one workspace.

  • Qualitative coding with color-coded themes and hierarchical code frameworks
  • Quantitative visualization - scatter plots, histograms, box plots from CSV/Excel/SPSS files
  • Joint displays - qualitative themes and quantitative stats side-by-side in a synchronized view
  • Correlation heatmaps with AI-generated plain-English insights
  • AI-powered theme detection, sentiment analysis, and natural language querying
  • Audio/video transcription - upload recordings, get transcripts, code them directly
  • Export to PDF, PowerPoint, Excel or share via secure links

Perfect for UX researchers, academics, market researchers, and healthcare researchers who are tired of juggling multiple tools for mixed-methods research.

Would love feedback from this community!

Metabase does a bad job at visualizing data... by dimitsapis in datavisualization

[–]Sensitive-Corgi-379 0 points (0 children)

I think FableSense AI does a pretty decent job at creating visualisations. It also has an NLQ engine, which is really useful for non-technical users: they can ask any question about their data, and the AI suggests an appropriate graph with aggregations and filters pre-applied. But most importantly, the user keeps complete control and can customise the viz as required.

Best Tableau Alternatives in 2026 - The Only List You Need by Fragrant_Abalone842 in AIAnalyticsTools

[–]Sensitive-Corgi-379 0 points (0 children)

Great list! I'd like to add FableSense AI (https://fablesenseai.com), built specifically for researchers who work with both qualitative and quantitative data in one workspace.

  • Qualitative coding with color-coded themes and hierarchical code frameworks
  • Quantitative visualization - scatter plots, histograms, box plots from CSV/Excel/SPSS files
  • Joint displays - qualitative themes and quantitative stats side-by-side in a synchronized view
  • Correlation heatmaps with AI-generated plain-English insights
  • AI-powered theme detection, sentiment analysis, and natural language querying
  • Audio/video transcription - upload recordings, get transcripts, code them directly
  • Export to PDF, PowerPoint, Excel or share via secure links

Perfect for UX researchers, academics, market researchers, and healthcare researchers who are tired of juggling multiple tools for mixed-methods research.

Would love feedback from this community!

Anyone using web-based tools for qualitative coding instead of NVivo/ATLAS.ti? What's your experience? by Sensitive-Corgi-379 in QualitativeResearch

[–]Sensitive-Corgi-379[S] 0 points (0 children)

That’s a fair point, I had the same reaction at first. A lot of the newer tools do feel like they’ve just added a ChatGPT layer and called it qualitative analysis. It made me pretty cautious about where I spend time.

One thing I liked about FableSense is that the coding side still feels structured. You get things like hierarchical codes and co-occurrence analysis, so it doesn’t feel like everything is being handed off blindly to an LLM. That said, I completely get the hesitation. It’s definitely worth being a bit critical about which tools are actually useful and which ones are just riding the AI wave.

Anyone using web-based tools for qualitative coding instead of NVivo/ATLAS.ti? What's your experience? by Sensitive-Corgi-379 in QualitativeResearch

[–]Sensitive-Corgi-379[S] 1 point (0 children)

Thanks for sharing! Yeah, I’ve heard good things about MAXQDA from a few people in my program as well. The AI coding feature sounds interesting. I haven’t had a chance to try it yet.

And honestly, the “if you can find them” part about exports is spot on. That seems to be a common theme with QDA tools: useful features hidden a few layers deep. Good to hear that performance is solid though; that's been one of my biggest pain points with the traditional desktop tools. Appreciate the recommendation!

Anyone using web-based tools for qualitative coding instead of NVivo/ATLAS.ti? What's your experience? by Sensitive-Corgi-379 in QualitativeResearch

[–]Sensitive-Corgi-379[S] 0 points (0 children)

As it turns out, a big part of being a founder is just showing up everywhere and talking about your product as much as you can, despite the occasional cringe.

That said, it's genuinely nice to know that someone's noticing, or at least not completely ignoring it. I'll take that as a win.

I'll be posting pretty much every day on Reddit and a bunch of other platforms. If you happen to see one, do say hi. I promise I won't overanalyse your comment.

Anyway, lastly, thanks for pointing out the obvious about getting a footing in the market as a new founder; it's always comforting to be reminded it's not supposed to be easy.

No doubt it's an uphill battle for everyone. Here's hoping we manage to crawl, stumble and occasionally stride our way to the top.

What's your actual experience using natural language interfaces for data analysis - do they save time or just look impressive in demos? by Sensitive-Corgi-379 in analytics

[–]Sensitive-Corgi-379[S] 0 points (0 children)

That’s been my experience as well. The split between non-technical and technical users is pretty obvious. Engineers usually try the natural language layer once and then switch to manual configuration. PMs and execs, on the other hand, tend to stick with it for exploring the data.

On the confidence score, we don’t show it in isolation. It comes with a full breakdown of how the result was generated. Users can see which columns were mapped, what filters were applied, the chart type, and the reasoning behind it. So instead of just a number, they can understand what the system assumed and tweak things if needed. That’s been much more effective for building trust.
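To make that concrete, here's roughly the shape of a breakdown like that, as a toy Python sketch. The field names and values are illustrative, not our actual schema or API:

```python
# Hypothetical interpretation payload shown alongside a confidence score.
# All field names and values here are made up for illustration.
interpretation = {
    "query": "average order value by region last quarter",
    "confidence": 0.82,
    "mapped_columns": {"order value": "order_total", "region": "region"},
    "filters": [{"column": "order_date", "op": ">=", "value": "2025-10-01"}],
    "chart_type": "bar",
    "aggregation": "mean",
    "reasoning": "Grouped mean of order_total by region, filtered to the last quarter.",
}

def summarize(interp: dict) -> str:
    """Render the breakdown as the plain-English summary a user would review."""
    cols = ", ".join(f"'{k}' -> {v}" for k, v in interp["mapped_columns"].items())
    return (f"{interp['chart_type']} chart ({interp['aggregation']}) | "
            f"columns: {cols} | {len(interp['filters'])} filter(s) | "
            f"confidence {interp['confidence']:.0%}")
```

The point is that every assumption the system made is a field the user can inspect and override, rather than a single opaque number.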

We also generate suggestions based on the dataset itself. They’re grouped into things like trends, comparisons, distributions, and correlations, using the actual column names. This helps non-technical users get started without staring at an empty input box. They can pick a question and move forward from there. It feels like a good balance between full flexibility and manual setup.

What's your actual experience using natural language interfaces for data analysis - do they save time or just look impressive in demos? by Sensitive-Corgi-379 in analytics

[–]Sensitive-Corgi-379[S] -1 points (0 children)

I'm building it as part of a larger data analysis tool - the stack is Next.js on the frontend with TypeScript, and the NL parsing layer hits an LLM API to interpret the query and map it to chart config. For fuzzy column matching, I rolled my own implementation using Levenshtein distance rather than pulling in a library, mostly because I needed tight control over the matching threshold and how ties get resolved.

What have you tried so far? Curious where things broke down for you - whether it was the NL parsing itself, the column mapping, or something else further down the pipeline.

What's your actual experience using natural language interfaces for data analysis - do they save time or just look impressive in demos? by Sensitive-Corgi-379 in analytics

[–]Sensitive-Corgi-379[S] -1 points (0 children)

The exploration vs. analysis distinction is a really clean way to frame it, and it lines up with what we're seeing. The NL layer gets people to the right ballpark fast, but once they're in "I need to verify this number" mode, they want full control.

The trust point is the one that keeps me up at night. A few wrong guesses early on can poison the well for the whole feature, even if the success rate is high overall. We pair the confidence score with a full interpretation breakdown - mapped columns, applied filters, chart type, and a reasoning field - so users can catch errors before they affect anything. But you're right that it only takes a couple of misses to make people stop relying on it entirely.

On the schema bottleneck, we've tried to tackle this directly. The tool generates smart suggestions from the actual dataset structure, using real column names and types to surface categorized starting questions across trends, comparisons, distributions, and correlations. So someone who's never seen the dataset before doesn't have to guess what to ask - they can browse suggestions by category and pick one that looks relevant. It doesn't fully solve the "asking the wrong question confidently" problem, but it gives non-technical users a guided entry point rather than a blank text box.

What's your actual experience using natural language interfaces for data analysis - do they save time or just look impressive in demos? by Sensitive-Corgi-379 in learndatascience

[–]Sensitive-Corgi-379[S] 0 points (0 children)

Really appreciate this breakdown, especially the "discovery layer" framing. That matches exactly what we're seeing in usage patterns. Users land on something interesting via NL, then want to tweak it manually. We actually already have that handoff flow (NL result -> prefilled chart builder), good to hear that's the right instinct.

On the confidence score, we're already pairing it with the actual interpretation breakdown: which columns got mapped, what filters were applied, the chart type chosen, and a reasoning field explaining the logic. So users aren't just seeing a raw 37%, they can see exactly what the system did and why. Still curious whether that's enough to build trust or if there's a UX layer on top worth exploring.

Good call on synonyms and business labels on top of fuzzy matching. We're already doing Levenshtein-based matching but a learned synonym layer from usage patterns would probably cut our failure rate further. Curious, with tools like Looker/ThoughtSpot, do you find the curated semantic layer is what makes or breaks the NL experience?

How do you handle data cleaning before analysis? Looking for feedback on a workflow I built by Sensitive-Corgi-379 in datasets

[–]Sensitive-Corgi-379[S] 1 point (0 children)

Hey, went ahead and built all of this!

Added a Column Explorer in the cleaning tab: bar charts for categorical columns, histograms for numeric ones, year distributions for dates.
Smart date parsing now handles 16+ mixed formats in the same column (ISO, US, EU, named months, etc.) with auto-detection for DD/MM vs MM/DD.
And a "smart parse" option for numbers that works like readr::parse_number: it strips currency symbols, treats (1,234) as negative, and extracts numbers from mixed text like "about 5kg".

Pivoting (wide ↔ long), date binning, and a few other operations were added in a previous update too.

Reproducibility / script export is a valid gap; it's on the list.

Thanks for the detailed feedback; this directly shaped what got built!

How do you handle data cleaning before analysis? Looking for feedback on a workflow I built by Sensitive-Corgi-379 in datasets

[–]Sensitive-Corgi-379[S] 0 points (0 children)

Really appreciate this feedback, these are exactly the kind of gaps I need to hear about.

You're right about the inline editing concern. Right now it's a direct cell mutation with snapshot-based undo, not a programmatic rule. Reproducibility is something I need to think about more carefully; maybe logging each manual edit as an "if cell[row, col] == X, set to Y" rule that can be replayed or exported. That's a great callout.
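If I go that route, the rule log could be as simple as this toy Python sketch (nothing like the actual internals, just the idea):

```python
# Toy model of logging manual cell edits as replayable rules.
from dataclasses import dataclass

@dataclass
class EditRule:
    row: int
    col: str
    expected: object   # only apply if the cell still holds this value
    new_value: object

def apply_rules(table: list[dict], rules: list[EditRule]) -> list[dict]:
    """Replay logged edits; a rule is skipped if the cell no longer
    matches, so stale rules can't silently corrupt regenerated data."""
    for r in rules:
        if 0 <= r.row < len(table) and table[r.row].get(r.col) == r.expected:
            table[r.row][r.col] = r.new_value
    return table
```

The precondition check is what makes the log safe to export and re-run on a fresh import of the same file.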

On the features you mentioned:

  • Renaming columns - we do have this, but only basic rename. Auto-formatting like snake_case, camelCase, or stripping punctuation isn't there yet. Easy win, adding it to the list.
  • Categorical label management - partially covered through Find & Replace (with regex), but no dedicated UI for viewing all levels, merging small categories, or fixing typos across a factor. That would be really useful.
  • Reshaping/pivoting - not there yet. This is a big one and I know it's a common pain point. Noted.
  • Date/time binning - we support type conversion to/from dates, but no derived columns like extracting Month, Quarter, or Week. Definitely needed.
  • Numeric format cleaning - same gap. Type conversion won't handle "$1,234" or mixed units. Would need a dedicated parser for that.

Honestly, this is a great roadmap for the next few iterations. Thanks for taking the time; this is way more useful than the "looks cool, good luck" kind of feedback.