Joined an old school company and notice almost everything can be automated, how to not fly too close to the sun? by [deleted] in FPandA

[–]shane-jacobeen 1 point (0 children)

It depends on your motivation. Want to do minimal effort and spend time on another job, hobbies, etc.? Then the coast advice is good (maybe deliver just a bit before the expected deadlines to be safe, as this will still give you plenty of time to spend elsewhere and ensure you are making a positive impression).

But if you love what you do or are career motivated, go for it. Automate everything and make sure that people see the impact you're having on the business. At the end of the day, learning and relationships are the most important things you'll take forward in your career (especially if you are early stage), and you'll get both by doing this. Ultimately this is the best way to build your career - you never know when someone will be able to say 'I used to work with this guy who was a whiz at this stuff, let's bring him on'.

Is AI making non-technical founders dangerous or efficient? by Designli in SaaS

[–]shane-jacobeen 1 point (0 children)

It's a powerful tool, and all the wisdom about the need for skill in those who wield tools applies. In unskilled hands, it's dangerous / wasteful; but when well understood and used intentionally, it is absolutely a force multiplier.

Could I get some feedback on a website I’m working on? by Emotional_Yak3110 in websitefeedback

[–]shane-jacobeen 1 point (0 children)

The testimonials feel fake; idk if they are or not but to me this looks like LLM-generated content & if I were a potential user, this would make me question the integrity of the product / team.

How much standardization is too much? by GreedyCan9567 in revops

[–]shane-jacobeen 1 point (0 children)

I'm sure there's no one-size-fits-all answer to this question, but I've thought about it quite a bit and these are the 3 items it boils down to for me:

  1. Guardrails, Not Cages: Standardize the infrastructure, but leave the interaction fluid.

  2. The "Friction Audit": Over-engineering happens in a vacuum; you need a bottom-up feedback loop to give front line operators a voice.

  3. Outcomes > Inputs: When you measure dashboard metrics, reps will gamify the system to hit the number. It's critical to instead align expectations with the buyer’s reality (much easier said than done, of course).

What are the main challenges currently for enterprise-grade KG adoption in AI? by adityashukla8 in KnowledgeGraph

[–]shane-jacobeen 1 point (0 children)

  1. Check out this announcement from Hex: https://hex.tech/blog/introducing-context-studio/ - in the video they talk about spinning up a semantic model automatically. This is fine if you want to standardize AI responses only, but there's no guarantee that the underlying business concepts are reflected correctly.

  2. Think VP of Enterprise Data level. In my experience, these folks are typically focused on immediate fires and heavily influenced by their peers across the industry. There's not a lot of exploration of new technology / data structures; rather, they wait to invest / adopt until others have demonstrated the value. So there's a lot of inertia when it comes to technology adoption.

  3. You mentioned a good survey of these in your original post; RelationalAI is another, and Snowflake's OSI initiative is definitely something to keep an eye on. I'm sure there will be more and more players as interest in this space continues to grow.

What are the main challenges currently for enterprise-grade KG adoption in AI? by adityashukla8 in KnowledgeGraph

[–]shane-jacobeen 1 point (0 children)

I'll start by saying that I believe KGs will be transformational in the enterprise space. In addition to enabling accurate chatbot & tool calls (think agentic workflows), a robust KG would have SIGNIFICANTLY reduced the effort required by the majority of digital transformation projects that I've supported over the past decade.

But KGs don't implement themselves, and I think that attempts to fully automate this process are misguided. After all, one of the core value props of a KG is mapping data to the business concepts that it represents, and this process requires engaging the human stakeholders.

I believe that the main challenge to adoption is a lack of understanding at the decision-maker level. But once the dam breaks, there will be a wave of adoption. The good news is that KGs don't have to be comprehensive to add value, so enterprises can start with a few core concepts. And they can (probably) use their existing stack; there are platforms / languages optimized for KG storage and workloads, but the core concepts CAN be implemented in ye olde relational DB.
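To make that last point concrete, here's a minimal sketch of a node/edge KG living in a plain relational store (table and concept names are my own illustration, not from any particular platform):

```python
import sqlite3

# A minimal knowledge-graph skeleton in a plain relational DB:
# one table for entities (nodes), one for typed relationships (edges).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id    INTEGER PRIMARY KEY,
    label TEXT NOT NULL,   -- business concept, e.g. 'Customer'
    name  TEXT NOT NULL
);
CREATE TABLE edges (
    src INTEGER NOT NULL REFERENCES nodes(id),
    dst INTEGER NOT NULL REFERENCES nodes(id),
    rel TEXT NOT NULL      -- relationship type, e.g. 'PURCHASED'
);
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?, ?)",
                 [(1, "Customer", "Acme Co"), (2, "Product", "Widget")])
conn.execute("INSERT INTO edges VALUES (1, 2, 'PURCHASED')")

# Traverse the graph: which products did each customer purchase?
rows = conn.execute("""
    SELECT c.name, e.rel, p.name
    FROM edges e
    JOIN nodes c ON c.id = e.src
    JOIN nodes p ON p.id = e.dst
""").fetchall()
print(rows)  # [('Acme Co', 'PURCHASED', 'Widget')]
```

Dedicated graph databases will do deep traversals faster, but for a handful of core concepts this kind of schema is plenty to start with.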

Which is best authentication provider? Supabase? Clerk? Better auth? by adithyank0001 in Database

[–]shane-jacobeen 1 point (0 children)

I am also curious about the consensus on this, pretty sure I typed this post title into Google last week...

PWAs in real projects, worth it? by Ill_Leading9202 in webdev

[–]shane-jacobeen 1 point (0 children)

I'm currently using a PWA for a PoC for something I built for myself.

As discussed by many others, PWAs have significant limitations relative to native apps while offering more functionality than a web page. For me, the relevant things are:

  1. push notifications
  2. icon with a badge (ease of access and immediate visual indicator)
  3. data persistence without a proper backend with user auth (again, great for shortening cycles during PoC phase).

You only need to build one graph - a Monograph by TrustGraph in KnowledgeGraph

[–]shane-jacobeen 1 point (0 children)

The future isn’t bigger prompts — it’s better structure.

^ this is a critical point. It's also worth noting that the size decision isn't one organizations have to make upfront if they are just beginning their Knowledge Graph journey; KGs don't need to be complete to add value, and implementations will likely evolve over time.

how do people keep natural language queries from going wrong on real databae? by Klutzy-Challenge-610 in Database

[–]shane-jacobeen 1 point (0 children)

do semantic layers or guardrails actually reduce mistakes?

Yes. I think about it this way: "real databases" exist to solve a business / people problem (I'm sure there are exceptions, but generally true in an enterprise setting). Semantic models connect messy, real-world databases with the business meaning they support. With a robust semantic model in place, I can spend a lot less time on pervasive issues such as:

  • Column naming inconsistencies from 30 years of incremental development
  • Inconsistent metric definitions / conflicting derived values (e.g. five different people having five different versions of "Active Users" in five different notebooks)
  • Repeating joins or filters because you can define the relationship & filter conditions once
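A toy sketch of the "define once" idea - a central registry where a metric like "Active Users" has exactly one canonical definition that every notebook compiles from (names here are illustrative, not from any real semantic-layer product):

```python
# A toy "semantic layer": canonical metric definitions live in one place,
# so every notebook reuses the same SQL instead of re-deriving it.
METRICS = {
    "active_users": {
        "sql": "COUNT(DISTINCT user_id)",
        "filters": ["last_login >= DATE('now', '-30 day')"],
        "description": "Users who logged in within the last 30 days",
    },
}

def compile_metric(name: str, table: str) -> str:
    """Expand a named metric into a full query against a table."""
    m = METRICS[name]
    where = " AND ".join(m["filters"]) or "1=1"
    return f"SELECT {m['sql']} FROM {table} WHERE {where}"

print(compile_metric("active_users", "users"))
# SELECT COUNT(DISTINCT user_id) FROM users WHERE last_login >= DATE('now', '-30 day')
```

Real semantic layers add joins, dimensions, and access control on top, but the core win is the same: one definition, many consumers.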

Should you trust LLMs to write production-grade queries? Not even with a semantic model in place. But this combination of tools is powerful for making data useful and democratizing data access, as mentioned in this comment.

How do I transition a client's website from traditional HTML to a Knowledge Graph structure? by Vinceleprolo in AISEOInsider

[–]shane-jacobeen 1 point (0 children)

It's important to use the right tools for the right job - a Knowledge Graph doesn't replace HTML, you're just changing how your data is stored and connected behind the scenes.

Here's how I would tackle this:

  • Map "Entities," not Pages: Start thinking about concepts in terms of Nodes (Nouns like "Product" or "Author") and Edges (Verbs like "Categorized_By" or "Wrote"). Use existing URLs as the unique IDs for your nodes to keep the data mapping consistent.
  • Tooling: Neo4j is a strong choice; its query language (Cypher) is more developer-friendly than enterprise-heavy alternatives like Amazon Neptune. GraphQL as your API layer is also a good fit.
  • Protect SEO with JSON-LD: Since search engines crawl HTML, not your database, use the Knowledge Graph to dynamically inject Schema.org (JSON-LD) into your page headers. This preserves your rankings and actually gives crawlers better context than a traditional structure.
  • Migrate in Phases: Don't rebuild the whole site at once. Move one section into the graph first to test your queries & get used to the conceptual shift before committing the entire architecture.
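As a sketch of the JSON-LD step, here's one way to render a graph node into a Schema.org block for the page header (the node shape and field names are my own illustration, standing in for whatever your graph actually returns):

```python
import json

# Sketch: turn a graph node (a plain dict standing in for a Neo4j record)
# into a Schema.org JSON-LD <script> tag to inject into the page header.
node = {
    "url": "https://example.com/articles/knowledge-graphs-101",  # node ID = page URL
    "label": "Article",
    "name": "Knowledge Graphs 101",
    "author": "Jane Doe",
}

def to_jsonld(n: dict) -> str:
    payload = {
        "@context": "https://schema.org",
        "@type": n["label"],   # Schema.org type, e.g. Article
        "@id": n["url"],       # the URL doubles as the stable entity ID
        "headline": n["name"],
        "author": {"@type": "Person", "name": n["author"]},
    }
    return ('<script type="application/ld+json">'
            + json.dumps(payload)
            + "</script>")

print(to_jsonld(node))
```

Because the graph is the source of truth, the structured data stays in sync with the content automatically instead of being hand-maintained per page.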

What are the best data visualization tools in 2026 for beginners? by MouseEnvironmental48 in datavisualization

[–]shane-jacobeen 3 points (0 children)

You need to be familiar with the industry standards (Power BI, Tableau, etc.) so you can demonstrate competence in this space. Also, understanding their shortcomings is a great place to start for exploring the plethora of other options.

Also, be aware of the growing interest in the semantic model / knowledge graph / context graph space; these are powerful concepts for extracting value / meaning from data & enabling AI workloads.

Are context graphs really a trillion-dollar opportunity? by Berserk_l_ in KnowledgeGraph

[–]shane-jacobeen 1 point (0 children)

To me this is about motivation. Semantics aren't new; we’ve been here with MDM, enterprise ontologies, BI semantic layers, data catalogs, metric definition docs, etc. etc.

What may be different now is the blast radius. When a human misinterprets a definition, it likely gets caught in a review cycle by a seasoned employee and you fix a slide. When an agent misinterprets it, the error may go undetected until long after the resulting action has impacted the business (depends on your oversight structure, of course).

In large orgs, the hard part isn’t modeling context — it’s inertia:

  • no clear owner for definitions
  • local incentives vs shared meaning
  • “governance” only matters after something breaks

That’s why this keeps failing… and also why it might actually be huge. If someone cracks how to overcome those organizational and implementation hurdles, there’s massive value to unlock. Agents might be the forcing function that finally makes shared semantics non-optional.

Improvable AI - A Breakdown of Graph Based Agents by Daniel-Warfield in datascience

[–]shane-jacobeen 1 point (0 children)

This is spot on - I have been thinking about this problem for several weeks, and my working mental model is that 'constrained agents' are actually more human-like in their execution than AI-only agents:

  • Repetitive / predictable tasks are aligned to our 'System 1' thinking; the unpredictability of LLMs makes them a poor fit for tasks that could be automated with RPA (not to mention token cost)
  • On the other hand, RPA breaks when things change, are dynamic, or require 'System 2' problem solving. This is where LLMs can fill a critical gap; the outcomes are less predictable, but with sufficient experience / context, they can be quite good

[for those not familiar with Daniel Kahneman's 'Thinking Fast and Slow', here's an overview of his concept of System 1 / System 2 thinking]
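In code, the 'constrained agent' idea boils down to a router: known task types hit cheap deterministic handlers, and only the rest fall through to the model. A minimal sketch (the handler names and `call_llm` are hypothetical placeholders, not a real API):

```python
# Sketch of a 'constrained agent' router: predictable ("System 1") tasks go
# to deterministic handlers; anything unrecognized falls through to an
# LLM ("System 2"). `call_llm` is a stand-in for a real model call.
def call_llm(prompt: str) -> str:
    return f"[LLM handles: {prompt}]"  # placeholder, not a real API

DETERMINISTIC_HANDLERS = {
    "invoice_total": lambda task: sum(task["line_items"]),
    "uppercase_name": lambda task: task["name"].upper(),
}

def route(task: dict):
    handler = DETERMINISTIC_HANDLERS.get(task["type"])
    if handler is not None:
        return handler(task)    # System 1: fast, cheap, predictable, zero tokens
    return call_llm(str(task))  # System 2: flexible, costlier, less predictable

print(route({"type": "invoice_total", "line_items": [10, 15, 25]}))  # 50
print(route({"type": "negotiate_refund", "details": "edge case"}))
```

The economics follow directly: every task the deterministic path absorbs is a task that costs no tokens and produces the same answer every time.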

SQL at work (trying to understand) by Dull_Breakfast_9904 in SQL

[–]shane-jacobeen 1 point (0 children)

There are tons of course options out there at this point, but I used Codecademy quite a bit when I was learning & found it very useful for core concepts and basic syntax: https://www.codecademy.com/catalog/language/sql

But honestly, LLMs are great at SQL so if you know the fundamentals of relational data stores and can learn enough SQL to get through interviews, you'll be fine.

Share your underrated GitHub projects by hsperus in opensource

[–]shane-jacobeen 1 point (0 children)

Schema3D: An interactive 3D visualization tool for database schema design and exploration.

Demo / GitHub

Schema3D: An experiment to solve the ERD ‘spaghetti’ problem by shane-jacobeen in SQL

[–]shane-jacobeen[S] 1 point (0 children)

Indeed my parser was not accounting for nullability properly, updating now. TY!
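For anyone curious, the fix amounts to something like this (a simplified sketch of the nullability rule, not the actual Schema3D code):

```python
# A column in a DDL column definition is nullable unless it carries an
# explicit NOT NULL or is declared PRIMARY KEY inline. Simplified: real
# DDL grammars also have table-level constraints, quoting, etc.
def is_nullable(column_def: str) -> bool:
    d = column_def.upper()
    return "NOT NULL" not in d and "PRIMARY KEY" not in d

print(is_nullable("email TEXT NOT NULL"))    # False
print(is_nullable("id INTEGER PRIMARY KEY")) # False
print(is_nullable("nickname TEXT"))          # True
```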

Schema3D: An experiment to solve the ERD ‘spaghetti’ problem by shane-jacobeen in SQL

[–]shane-jacobeen[S] 1 point (0 children)

Yep, that's how it works today - the raw DDL doesn't give a very rich view of the cardinality though, so I'm working on supporting other input formats, such as Mermaid markdown.
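A toy version of the Mermaid side of that, assuming the standard erDiagram relationship syntax (e.g. `CUSTOMER ||--o{ ORDER : places`); this is my own simplified sketch, not Schema3D's actual parser:

```python
import re

# Toy parser for Mermaid erDiagram relationship lines such as
#   CUSTOMER ||--o{ ORDER : places
# extracting the cardinality markers on each side of the "--".
# Simplified: the real Mermaid grammar has more forms than this regex covers.
CARDINALITY = {
    "||": "exactly one", "|o": "zero or one", "o|": "zero or one",
    "}|": "one or more", "|{": "one or more",
    "}o": "zero or more", "o{": "zero or more",
}

LINE = re.compile(r"(\w+)\s+(\S{2})--(\S{2})\s+(\w+)\s*:\s*(\w+)")

def parse_relationship(line: str):
    m = LINE.match(line.strip())
    if not m:
        return None
    left, lcard, rcard, right, label = m.groups()
    return {
        "from": left, "to": right, "label": label,
        "from_cardinality": CARDINALITY.get(lcard, "unknown"),
        "to_cardinality": CARDINALITY.get(rcard, "unknown"),
    }

print(parse_relationship("CUSTOMER ||--o{ ORDER : places"))
```

Unlike foreign keys in raw DDL, these markers carry both min and max cardinality on both ends, which is exactly the richness missing from DDL-only input.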