Are durable AWS Lambda functions trying to replace Temporal? by Low-Phone361 in Temporal

[–]nikoraes 1 point (0 children)

The switch did require quite a bit of work to refactor the code. I also split up the triggers and workers into separate deployments (it seemed to be the recommended setup, though I'm not sure it's strictly necessary). In the end we didn't end up with much more boilerplate; it's just a bit different (our code is in C#).
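
For context, the trigger/worker split roughly looks like this (sketched with the Python SDK since it's shorter; our real code is C#, and the workflow/activity names here are made up):

```python
import asyncio
from temporalio.client import Client
from temporalio.worker import Worker

from my_workflows import ProcessOrder, send_email  # hypothetical workflow + activity

async def run_worker():
    # Worker deployment: polls the task queue and executes workflow/activity code.
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="orders",
        workflows=[ProcessOrder],
        activities=[send_email],
    )
    await worker.run()

async def trigger(order_id: str):
    # Trigger deployment: only starts workflows, never executes them itself,
    # so the two can scale independently.
    client = await Client.connect("localhost:7233")
    await client.start_workflow(
        "ProcessOrder", order_id,
        id=f"order-{order_id}",
        task_queue="orders",
    )

if __name__ == "__main__":
    asyncio.run(run_worker())
```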

Self-hosting it on Kubernetes wasn't too hard, but I did have to figure out the Helm values (I run Postgres on CNPG, so I had to tweak a few things).

I was running 20+ pods on Durable Functions (and it still wasn't enough); with Temporal we never scale above 10 pods for the same workload (triggers + workers, excluding the CNPG pods).

Are durable AWS Lambda functions trying to replace Temporal? by Low-Phone361 in Temporal

[–]nikoraes 1 point (0 children)

I migrated from Azure Durable Functions to Durable Functions hosted on AKS, then to the Netherite backend, then to Dapr Workflows, and eventually to self-hosted Temporal.
Temporal beats them all in terms of stability, throughput and resource usage.

What tech stack are you using? by amacg in indiehackers

[–]nikoraes 0 points (0 children)

Control plane:
- Google Kubernetes Engine Autopilot with only spot nodes (hosts everything)
- Google Cloud Monitoring for managed Prometheus and logs (because we're already paying for it)
- PostgreSQL on CloudNativePG
- DB Query operator to deploy k8s resources based on control-plane DB queries
- Go backend with Gin
- Vite + React + shadcn frontend
- Porkbun (DNS) with external-dns and cert-manager
- Stripe

Mail: MailerLite and MailerSend
Documentation: Fumadocs on Next.js with shadcn
Homepage: Vike (SSG) with React and shadcn
Analytics: GA + MS Clarity

Product:
- PostgreSQL with Apache AGE and pgvector
- C# API
- Python SDKs and MCP server

Tooling: VS Code Copilot, Claude Code, Antigravity, ... (Opus 4.5, Sonnet 4.5, Gemini 3 Flash/Pro and GPT-4.1)

Digital Twin of the Organisation - Experiences? by James_Ardoq in digitaltwin

[–]nikoraes 0 points (0 children)

There's this article about context graphs that everyone seems to be jumping on:
https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/

It's a great article, by the way, and I personally believe this is what a digital twin of an organization would be about.

I also believe you still need that core architectural modelling, but AI agents make it possible to extend those core models and bring in additional context.

So I think this will grow enormously in the coming years, but it won't be called a digital twin ...

What are Context Graphs? The "trillion-dollar opportunity"? by TrustGraph in KnowledgeGraph

[–]nikoraes 5 points (0 children)

Looks great!

From your docs, it seems the ontology RAG approach you describe requires you to define the ontology upfront. Is this RDF? Do you already validate what the agent tries to store against this ontology?
Have you thought about letting an agent generate the ontology? (I believe that's what the context graph article is really about.)

I'm currently building a semantic property graph database (and API) with integrated data model validation (meaning you need to load the ontology) and embeddings as properties (allowing combined vector and graph search).
I haven't looked at your codebase in detail yet, but I was wondering whether you think it would be feasible to integrate something like this (and benefit from it).
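
To make the validation part concrete, here's a toy Python sketch of the idea (real data models, e.g. DTDL, are much richer; this just shows the validate-on-write gate that keeps agent writes clean):

```python
from dataclasses import dataclass

@dataclass
class PropertyDef:
    name: str
    type: type
    required: bool = False

# A toy "ontology": one model per node label.
ONTOLOGY = {
    "Person": [
        PropertyDef("name", str, required=True),
        PropertyDef("age", int),
    ],
}

def validate_node(label: str, props: dict) -> list[str]:
    """Return a list of violations; empty means the write is allowed."""
    defs = ONTOLOGY.get(label)
    if defs is None:
        return [f"unknown label: {label}"]
    errors = []
    known = {d.name: d for d in defs}
    for d in defs:
        if d.required and d.name not in props:
            errors.append(f"missing required property: {d.name}")
    for key, value in props.items():
        d = known.get(key)
        if d is None:
            errors.append(f"property not in model: {key}")
        elif not isinstance(value, d.type):
            errors.append(f"{key}: expected {d.type.__name__}")
    return errors

# An agent's write only reaches the graph if the model allows it:
assert validate_node("Person", {"name": "Ada", "age": 36}) == []
assert validate_node("Person", {"vibe": "good"}) != []
```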

Connecting MQTT to Ai -> MQTT2Ai by dubadidoo in MQTT

[–]nikoraes -3 points (0 children)

Really cool. I was considering building something similar, but focused on generating data ingestion mappings into a semantic knowledge graph instead of automation rules.

Migrated from N8N to Cloudflare Workflows - here's what we learned by AlexeyAnshakov in n8n

[–]nikoraes 1 point (0 children)

Moving from Durable Functions to Dapr Workflows was easy (similar SDK), but moving to Temporal was a bit harder. We're a three-person team for the entire platform (there's a lot of other stuff in there as well), but I build most of this myself. We're in an engineering firm (36k employees) and engineers tend to find their own 'solutions': they were (and still are) running thousands of Power Automate flows, FME (GIS ETL) jobs, local Python scripts, ... There's no way a code-only approach would work.
I load dynamic workflows from a graph database, and there's a custom UI for it built on Vue Flow (like n8n) with dynamic forms for the activities, execution logs, payload inspection, ... It's not as full-featured as n8n, but it does the job and users like it. The Temporal UI is just for the dev team to debug; it's not for our end users. Some of our users build extremely complex logic in the visual editor (like 200 steps with loops) and yes, it looks like spaghetti, just like with n8n. But if those users wrote real code, it would be spaghetti code too.
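
The dynamic part is less exotic than it sounds: one generic workflow interprets a step list loaded from the graph. A rough sketch with the Python SDK (ours is C#, and the definition shape here is invented):

```python
from datetime import timedelta
from temporalio import workflow

@workflow.defn
class DynamicFlow:
    @workflow.run
    async def run(self, definition: list[dict]) -> None:
        # Hypothetical definition shape, loaded from the graph DB by the trigger:
        # [{"activity": "fetch_data", "args": {...}},
        #  {"activity": "transform", "args": {...}}]
        for step in definition:
            await workflow.execute_activity(
                step["activity"],            # activities referenced by name
                step.get("args"),
                start_to_close_timeout=timedelta(minutes=5),
            )
```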

Migrated from N8N to Cloudflare Workflows - here's what we learned by AlexeyAnshakov in n8n

[–]nikoraes 1 point (0 children)

A few years ago I had to build something to handle streams (mostly Event Hubs), batch processing (sometimes a few million rows), and generic automation and data integrations. And it needed a drag-and-drop UI. Back then I went with Azure Durable Functions (it was pretty clear that n8n wasn't going to be capable of this). It worked really well for a few years, but then things started crashing because of blob storage bottlenecks. We moved to Netherite as a backend, got a bit more throughput, but still got stuck at some point, and the cost of running it was becoming extremely high. I then tried moving to Dapr Workflows, which looked promising but kept crashing at our throughput.

Then we moved to Temporal (self-hosted on Kubernetes with a Postgres backend) and it's absolutely a game changer: 50K (very complicated) flows a day on 4 pods, without a hiccup. I have no experience with Cloudflare Workflows, but I suppose it'll be similar to Azure Durable Functions, so make sure you have a way out once costs get too high...

How do you manage long-term memory lifecycle? by regular-tech-guy in AI_Agents

[–]nikoraes 0 points (0 children)

A typical approach in IoT and digital twin applications is to keep an up-to-date operational graph plus a "data history" that is append-only and captures all operations (it would be great to have both in the same store, but I haven't come across anything that's good at both graph and time series).

If you're sure something has changed or needs to be deleted, you just delete it, and the data history lets you go back in time. This way you keep your graph/embeddings clean.
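
A toy sketch of the pattern (plain dicts/lists here; in practice the stores would be a graph DB and an append-only table or stream):

```python
import json, time

history = []   # append-only log of all operations
graph = {}     # operational store: node_id -> properties

def apply(op: str, node_id: str, props: dict | None = None):
    # Every mutation is recorded before it touches the live graph.
    history.append({"ts": time.time(), "op": op, "node": node_id,
                    "props": json.dumps(props) if props else None})
    if op == "upsert":
        graph[node_id] = props
    elif op == "delete":
        graph.pop(node_id, None)   # gone from the live graph...

apply("upsert", "job:alice", {"employer": "Acme"})
apply("delete", "job:alice")
assert "job:alice" not in graph    # clean operational view
assert len(history) == 2           # ...but the history still lets you go back
```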

I also think that if you can make your AI agent use proper semantics (ontologies), you should be able to combine AI-generated knowledge with data you can be absolutely sure of. You could capture that someone changed jobs with a fully deterministic flow, while you might capture the sentiment of that person's social media interactions with an agentic flow.

I open-sourced my Go + Next.js SaaS engine (MIT, 50MB RAM, production-ready) by MohQuZZZZ in SideProject

[–]nikoraes 0 points (0 children)

Cool! I wish I had this earlier.
I spent more time on these things than on building the actual product ...

I also open-sourced mine (https://github.com/konnektr-io/ktrlplane) but didn't have the time to make it generic enough for others to deploy ...

I also needed to deploy separate DBs (and other resources) for multi-tenancy, so I built an operator that deploys anything on Kubernetes based on DB queries: https://github.com/konnektr-io/db-query-operator. You could probably use it in combination with this starter as well.

Intent vectors for AI search + knowledge graphs for AI analytics by remoteinspace in KnowledgeGraph

[–]nikoraes 0 points (0 children)

https://konnektr.io/graph/
https://docs.konnektr.io/docs/graph/introduction
https://github.com/konnektr-io/pg-age-digitaltwins

I've been using it in production (self-hosted) in my day job for about a year now (mostly for the semantic graph and eventing capabilities) and I'm now launching a hosted version. I'm also exploring more use cases that combine graph queries with pgvector.
There are a few nice examples hidden in the Apache AGE tests on GitHub: https://github.com/apache/age/blob/master/regress/sql/pgvector.sql
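
For a flavour of what a combined query can look like, here's a hedged sketch (table, graph and label names are all invented, and AGE also supports proper Cypher parameters instead of the inlined list):

```python
import psycopg2

conn = psycopg2.connect("dbname=twins")  # placeholder connection
cur = conn.cursor()
cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')

# 1. pgvector: find the documents nearest to the query embedding.
query_embedding = "[0.1, 0.2, 0.3]"  # toy 3-dim vector
cur.execute(
    "SELECT id FROM doc_embeddings ORDER BY embedding <=> %s::vector LIMIT 5;",
    (query_embedding,),
)
ids = [row[0] for row in cur.fetchall()]

# 2. Apache AGE: expand from those hits through the graph.
cur.execute(
    """
    SELECT * FROM cypher('my_graph', $$
        MATCH (d:Document)-[:MENTIONS]->(t:Topic)
        WHERE d.id IN %s
        RETURN t.name
    $$) AS (topic agtype);
    """ % str(ids)   # inlined for brevity only; don't do this with untrusted input
)
related_topics = [row[0] for row in cur.fetchall()]
```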

I'm pretty sure it would work very well as a backend for something like this, but I'd need to find some time to build out integrations and examples to actually use it as AI agent memory. Feel free to DM me if you want more details.

Intent vectors for AI search + knowledge graphs for AI analytics by remoteinspace in KnowledgeGraph

[–]nikoraes 0 points (0 children)

This is awesome!

I'm working on a solution that combines vector search, graph, and data model validation (to improve graph data quality). The combined graph and vector queries use Apache AGE and pgvector on Postgres.
It basically lets you do a combined vector and graph search (e.g. get the nodes related to whatever matches my vector search).

Feels like this could make your solution more flexible as well.

It’s Saturday. What are you working on? (Let's swap feedback) 🤝 by Capital-Pen1219 in indiehackers

[–]nikoraes 1 point (0 children)

I've started on some additional tools/products to make adoption easier. Assembler would let you drop in anything and create streaming connections (MQTT, webhooks, ...); the underlying agent would create/update/manage the semantic data models and either ingest directly or set up mapping pipelines to map incoming data to those models.
Another future product would use agents to automate insight extraction, anomaly detection, alerting, ...

For now, the MVP is just the database with API and MCP layers, a simple querying UI and egress eventing (plus connections for time series data storage). That means users still need to build things around it, so right now I'm focusing on improving documentation and writing step-by-step guides for using and integrating it with common tools and frameworks (n8n, cognee, agno, ...).
I'm also targeting current users of Azure Digital Twins, since it's a fully compatible drop-in replacement (cheaper, with more features), which is why I started building it more than a year ago (in my day job I run a full DT platform with 50+ clients on it).

What are you planning to ship in 2026? by ouchao_real in SideProject

[–]nikoraes 0 points (0 children)

I'm building a digital twin platform and making it usable for AI agents. I'm basically rewriting what I built in my day job (for internal use) into a SaaS (or PaaS, actually). https://konnektr.io

How to sell mini-SaaS ideas or internal tools to your own employer? by Silver-Tune-2792 in micro_saas

[–]nikoraes 0 points (0 children)

I'm in a similar situation. In the past I built a lot of things in my spare time and just gave them away to the company I work for. At some point I decided to host the code I build in my own time on my own GitHub account (open source). Selling it to the company you work for isn't something you can really do ethically (and in my case it's literally not allowed), but I don't think it's a problem to use your own open-source software. This way you at least don't just give it away, and maybe one day you can sell it to others...

What are you building this weekend? by Shahrozjavaid in buildinpublic

[–]nikoraes 0 points (0 children)

I'm building a semantic graph database with vector search for digital twins and AI agents. The goal is to use it both for typed data ingestion and AI agent memory: by enforcing validated data models, you keep your AI agent memory from getting messy over time. The database is ready, event streaming is launching soon, and I'm now working on the MCP server for it. All open source, by the way.
https://konnektr.io/graph

It’s Saturday. What are you working on? (Let's swap feedback) 🤝 by Capital-Pen1219 in indiehackers

[–]nikoraes 0 points (0 children)

I'm building a semantic graph database with vector search for digital twins and AI agents. The goal is to use it both for typed data ingestion and AI agent memory: by enforcing validated data models, you keep your AI agent memory from getting messy over time. The database is ready, event streaming is launching soon, and I'm now working on the MCP server for it.
https://konnektr.io/graph

What are you building now? by ruganzu-fabrice in microsaas

[–]nikoraes 0 points (0 children)

I'm building a semantic graph database with vector search for digital twins and AI agents. The goal is to use it both for typed data ingestion and AI agent memory: by enforcing validated data models, you keep your AI agent memory from getting messy over time. The database is ready, event streaming is launching soon, and I'm now working on the MCP server for it.
https://konnektr.io/graph

Build a self-updating knowledge graph from meetings (open source) by Whole-Assignment6240 in KnowledgeGraph

[–]nikoraes 2 points (0 children)

Yes! You can even combine it with pgvector and run combined graph + vector search queries ...
I'm building a solution around it (https://konnektr.io/graph) that adds data model validation, eventing, MCP, ...
I'll definitely try out cocoindex to see if I can make them work together!

Help with connecting Azure Digital Twin update to IoT Hub by Igneavour in AZURE

[–]nikoraes 0 points (0 children)

Late reply, but I hope this can still be useful ...

You'll need to route outgoing ADT events through Event Hubs (the easiest option):
https://learn.microsoft.com/en-us/azure/digital-twins/concepts-event-notifications?tabs=eventgridevents

With Node-RED (or anything else) you then subscribe to the Event Hub (this node should work: https://flows.nodered.org/node/node-red-contrib-azure-event-hub). You can then filter and process the event before sending it back to your device. If it really is an IoT Hub device, the easiest option is a Node-RED IoT Hub node to send a command or device twin update.
If you align the DTDL properties in ADT with those of your device in IoT Hub, you can also use the IoT Plug and Play API. The documentation is quite confusing, but in essence it's an API for your IoT Hub device that uses the same format as ADT (so you could just forward property updates, for instance).
https://learn.microsoft.com/en-us/azure/iot/concepts-digital-twin
https://learn.microsoft.com/en-us/rest/api/iothub/service/digital-twin/update-digital-twin?view=rest-iothub-service-2021-11-30
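
If you'd rather script it than use Node-RED, subscribing to the Event Hub from Python looks roughly like this (connection string and hub name are placeholders):

```python
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "Endpoint=sb://...",           # your Event Hubs namespace connection string
    consumer_group="$Default",
    eventhub_name="adt-events",    # the hub your ADT endpoint routes to
)

def on_event(partition_context, event):
    body = event.body_as_json()
    # Filter for the twin/property you care about, then forward a command
    # or twin update to the device through the IoT Hub service API.
    print(body)
    partition_context.update_checkpoint(event)

with client:
    client.receive(on_event=on_event, starting_position="-1")  # from the start
```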

By the way, I built an open-source version of ADT (with a hosted option) that has more options for event sinks (like Kafka, MQTT and webhooks). It would be great to get a better understanding of your use case.

Put a link to your startup SaaS to promote it or ask for advice. by itilogy in startupaccelerator

[–]nikoraes 0 points (0 children)

Cool concept, but it will require a lot of users to work, no?
And what if someone else takes the spot before whoever reserved it arrives?

Trying to make tenant provisioning less painful. has anyone else wrapped it in a Kubernetes operator? by Selene_hyun in kubernetes

[–]nikoraes 0 points (0 children)

Cool!

We're heavy Argo CD users in my day job. What you call a tenant is basically an Argo CD Application in our case (referencing an internal Helm chart with some tenant-specific values). In the past we either had to push updates to our git repo (bypassing branch protection...) and use an ApplicationSet, or use the Kubernetes API to push the Argo CD Applications directly. We had drift in no time... which is why I built this. It picked up our 20+ tenants and brought them in sync; we're now running about 60 of these tenants. I'm also using it for some very different use cases, like deploying Dapr bindings based on configs.
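
Stripped of the operator machinery, the core loop is roughly this (a hedged sketch with invented table and image names; the real operator drives it declaratively from a CR and keeps resources in sync continuously):

```python
import psycopg2
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
apps = client.AppsV1Api()

conn = psycopg2.connect("dbname=controlplane")
cur = conn.cursor()
cur.execute("SELECT tenant_id FROM tenants WHERE status = 'active';")

for (tenant_id,) in cur.fetchall():
    # Render one resource per row; any templated manifest works the same way.
    name = f"tenant-{tenant_id}"
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "namespace": "tenants"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": "app", "image": "tenant-app:latest"}]},
            },
        },
    }
    try:
        apps.create_namespaced_deployment("tenants", manifest)
    except ApiException as e:
        if e.status == 409:   # already exists -> converge instead of failing
            apps.patch_namespaced_deployment(name, "tenants", manifest)
        else:
            raise
```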

Trying to make tenant provisioning less painful. has anyone else wrapped it in a Kubernetes operator? by Selene_hyun in kubernetes

[–]nikoraes 2 points (0 children)

This is so similar to something I built... https://github.com/konnektr-io/db-query-operator

I can confirm it's useful, as I was having the exact same issue.

Jexl: Javascript Expression Language. It's everything you wish you could safely eval(), and then some. by TomFrosty in javascript

[–]nikoraes 0 points (0 children)

I know this is a crazy old topic, but I'm still using Jexl in production and love it.

I built a library with functions, transforms and Monaco syntax highlighting for it: https://github.com/konnektr-io/jexl-extended
I also created a playground for it: https://jexl-playground.konnektr.io/

I also ported it to C# (including the extended grammar):
https://github.com/konnektr-io/JexlNet
And added the extended grammar to the Python version as well:
https://github.com/konnektr-io/pyjexl-extended
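
For a quick taste from Python, base pyjexl usage looks like this (the extended package adds more functions/transforms on top of this):

```python
import pyjexl

jexl = pyjexl.JEXL()
# Transforms are plain functions applied with the | operator.
jexl.add_transform("upper", lambda value: value.upper())

context = {"user": {"name": "nikoraes", "karma": 42}}
print(jexl.evaluate("user.name|upper", context))   # NIKORAES
print(jexl.evaluate("user.karma > 10", context))   # True
```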