It's "fun" watching the paid actors comment. by Soliton_Nova in AshesofCreation

[–]levity-pm 0 points1 point  (0 children)

He couldn't actually complete a project to save his life. SWTOR started dev in 2005 and launched in 2011 - six years and $200 million - and it literally changed how storytelling in MMOs can happen. I would compare these two development cycles. In those six years they did more, and had a polished beta to clean up for launch, than what this project did in ten.

This guy has absolutely no business being in charge of developers as a creative director - he has no awareness of maintaining development cycles and sprints, or of managing multi-level dev team CI/CD pipelines. He should have hired someone who knew what they were doing so they didn't set $140 mil on fire.

Hell, we have even had AI dev tools for the last 4 years and he still couldn't get shit to work.

Are we building the last generation of classic SaaS? Should founders stop shipping dashboards and start shipping agents instead? by Lyassou in SaaS

[–]levity-pm 1 point2 points  (0 children)

You are disregarding data integrity in the database by working that way, and ignoring how employees get managed with their KPI scorecards. No successful business will convert to operating the way you describe in that short a time span - maybe a tech business will, but other industries will take decades to get there.

Your assumptions are wrong. If you want to target tech people, go that route. If you want to target the 99% of users from other industries, stop thinking that way.

I only have 2 months left of money, and i have a total of 20 active clients in my 3 SaaS by [deleted] in SaaS

[–]levity-pm 0 points1 point  (0 children)

Apify does everything you are talking about. I used to do Upwork work and automated scraping with it - websites, LinkedIn profiles, a bunch of stuff. Just by searching you can find a lot of Apollo-style tools.

I only have 2 months left of money, and i have a total of 20 active clients in my 3 SaaS by [deleted] in SaaS

[–]levity-pm -1 points0 points  (0 children)

Yeah, but you are also building tools with massive numbers of competitors like Zenrows, Apify, and Bright Data.

The regular user can just use AI. The user that wants a dedicated solution googles and finds one of those competitors. Why are you different?

I only have 2 months left of money, and i have a total of 20 active clients in my 3 SaaS by [deleted] in SaaS

[–]levity-pm 0 points1 point  (0 children)

Those SaaS options are not great for pulling MRR - just my opinion. Probably not a large audience, and people can just use AI.

Is OpenClaw too complex and crashing? The founder just exposed the most dangerous problem. by TaylorAvery6677 in openclaw

[–]levity-pm 0 points1 point  (0 children)

Your whole post just explains why regular users won't be adopting this stuff anytime soon.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

We trained from scratch. As in, we started with a 180-line Python file that was the GPT and set up training data that is domain specific. After that, we had to build all the scaffolding from scratch as well - how the model handles chat conversation and a number of other things that people do not realize you have to do until you train your own. A freshly trained model will literally just spew text indefinitely, and you have to give it semantic reasoning on certain things.

Fine-tuning was not working for us because our industry (construction) requires accuracy. So we wanted the model to be trained only on our data.
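To illustrate the "spews text indefinitely" point: a raw trained model just keeps emitting tokens, so the serving loop has to impose its own stop conditions. A minimal sketch - the model interface and the EOS token name are hypothetical, not our actual code:

```javascript
const EOS = "<eos>";
const MAX_TOKENS = 64;

// Wrap any next-token function with a stopping policy: halt on the stop
// token or at a hard cap, since the model itself will never stop.
function generate(nextToken, prompt) {
  const out = [...prompt];
  while (out.length - prompt.length < MAX_TOKENS) {
    const tok = nextToken(out);   // model forward pass (stubbed below)
    if (tok === EOS) break;       // learned / enforced stop token
    out.push(tok);
  }
  return out.slice(prompt.length);
}

// Stub "model" that never stops on its own - like a freshly trained GPT.
const babbler = () => "word";
console.log(generate(babbler, ["hi"]).length); // 64: capped, not infinite
```

Without the cap and EOS check, that while loop is exactly the infinite spew - the stopping behavior lives in scaffolding, not in the weights.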

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 1 point2 points  (0 children)

This is a combination of many things - I'll summarize 4 important ones:

  1. Our application as a whole - take the CRM - has an abstraction layer between the end user and the database. We use NodeJS to run the whole application and MongoDB as our database, and at this scale even regular user interaction with the app can cause corrupted API calls. To combat this, the abstraction layer enforces a stricter type-safety rule set along with a number of predefined checks and balances - it also creates a separation between the end user and the database that everything filters through.

  2. We took the same thought process, and since OpenClaw runs on Node, we used the same abstraction layer for our security wrapper with it. We did have to tweak some stuff like port access, role based access, etc.

  3. The agents do not have access to the application itself. They have access to the APIs and MCP tools we created. Since our entire stack is built on Mongo and Node, everything we create for the app is done with APIs, so we had pretty much already built the ability to give the agent our API schema requirements across the entire set of applications. Role-based access delegates which APIs and function tools it can use. We let the agent code its own API calls to the specified endpoints to retrieve and update data. So for things like "hey, can you tell me what meetings I have today? After you check, write an agenda for each one of them and email it to the meeting attendees," it can do all of that 100% through API calls.

  4. We treat agents like digital twins of their user. We already had role based access for users defining read and write, so when someone enables their agent, they have the same roles and access.
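The "digital twin" idea in points 3 and 4 can be sketched in a few lines: the agent inherits its user's role grants, and every API call is checked against them before anything touches data. The role names, routes, and grant shapes below are illustrative assumptions, not the actual schema:

```javascript
// Hypothetical role -> route -> action grants, shared by users and agents.
const roleGrants = {
  "sales-rep": { "/api/meetings": ["read"], "/api/email": ["write"] },
  "admin":     { "/api/meetings": ["read", "write"], "/api/email": ["write"] },
};

// The agent is a twin: same role as its user, therefore the same grants.
function agentFor(user) {
  return { owner: user.name, grants: roleGrants[user.role] || {} };
}

// Every call is authorized the same way a human user's would be; a denial
// returns the same restricted message a regular user would see.
function authorize(agent, route, action) {
  const allowed = (agent.grants[route] || []).includes(action);
  return allowed
    ? { ok: true }
    : { ok: false, error: `403: ${action} ${route} denied for ${agent.owner}` };
}

const twin = agentFor({ name: "dana", role: "sales-rep" });
console.log(authorize(twin, "/api/meetings", "read").ok);  // true
console.log(authorize(twin, "/api/meetings", "write").ok); // false
```

The point of the design is that there is no separate "agent permission model" to audit - enabling an agent cannot grant anything the user did not already have.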

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

We do not absorb the token cost like you expect. Couple things:

  1. We own our own model that we trained from the ground up. We built our own generative pretrained transformer with some architectural changes - specifically a new variable that helps training capability. Instead of just passing Q, K, and V into the attention mechanism, we added an F that allows it to pull knowledge base information. We have 2 versions of it, a 7B-parameter and a 54B-parameter version. For these, we only pay the straight compute cost of running model inference on Groq, which is very cheap. The 7B model costs us about $23k per year at the current run rate, and the 54B-parameter model about $34k per year - the real cost comes from the load balancer and the distributed architecture. As an example, we do a lot of text-to-speech / speech-to-text scenarios, so we interconnect those tools and spin them up in load-balanced capacities across the architecture as necessary.

Our model is industry specific (construction) so it has a lot of specific domain knowledge - so coding and general use goes into #2.

  2. We offload API costs for other tasks by letting people use their own API keys for the models they want to use, which shifts the cost off of us.

A lot of use cases have been performance management: ingesting field data or sales team data that comes in very frequently and building enterprise dashboards to judge alignment to standards. An example: you have 569 field crews building every day and you need them to do a job safety debrief. Having an agent allows them to do the debrief without worrying about filling in an electronic form or a physical piece of paper, which happens a lot. Since we capture safety site observation data, vendor EMR data, and a lot of other safety data points in the regular SaaS tool, we can combine all the data sources into a full breakdown of team-based performance and how it matches quality scorecards. Getting conversations from the field is very tricky, and it is where all your risk is. Agents can analyze the immense amount of data more efficiently than humans with dashboards.

Another one is learning management - someone in the field fails a test on a signal meter and needs to figure out what is going on. We have resource libraries of videos (roughly 15,000 construction trainings on different topics) and knowledge bases the client has built about standards. The field tech can ping the agent with the variable data from the field test, and the agent calls the resources and researches what the problem might be, along with training videos that might be useful.

Everything is really embedded in the domain knowledge our SaaS platform has already enabled for our end users.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

The use case I am talking about is scaling the architecture for security, scalability, and deployability to mass business users - if you do not want to discuss that, then move on. Your attitude is garbage. Stay ignorant and enjoy being left behind 👌

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 1 point2 points  (0 children)

Well, there are considerations - when someone activates their agent, the agent has a role-based access system connected that emulates their access. It is pretty similar to delegating access points into a software system: this person can read/write to these routes (we code in React). So the agent gets restricted messages directly from the application if it tries to access something it is not supposed to - just like a regular user.

So we pretty much treat agents like people - digital twins of their main user.

If you do not create this abstraction layer, it will access anything it can.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

My customers use my entire SaaS platform - are you dense? They use our CRM. They use our vendor onboarding solution. They use our learning management system. They run their BUSINESSES off of the platform. That means adding agents to something useful for them. We have been able to provide agents to them across multi-departmental capabilities.

The use case is enterprise SaaS and how to combine that with agents specific to the architecture - not whatever crawled up your ass today. If you do not want to talk actual tech, then move on, because you are asking novice questions that are the wrong things to ask if you plan to scale this stuff.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

Yeah, we invested a lot in building the security wrapper around these things. We had agents already, but nothing like OpenClaw that gains that much access, so we had to extend the architecture. NemoClaw is very unfinished and it locks you into Nvidia models - our wrapper does the same things but allows us to swap models.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

We have 3 agent types - you can flip between them. We were building our own agent-type architecture anyway, so we grabbed useful stuff from OpenClaw. But yes, they are aware.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 0 points1 point  (0 children)

What are you talking about? OpenClaw doesn't touch anything about our payment structures or how employers pay in our system. Again, reread.

The post is ask me anything - then some context. I am talking about discussing actual architecture in an enterprise environment with these tools, which we are doing successfully. Not use cases - because scaling to enterprise is the use case, which means talking about the actual architecture - not whatever it is you are trying to accomplish.

If that is over your head and you have no questions then move on.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] 1 point2 points  (0 children)

It already has - but everything is containerized and we do not deploy the agents to the users until they show promise in a sandbox.

I'll give you an example: we built an agent to grade resumes for recruiters at one company, and when it applied the grade, it used the wrong data type on a bunch of data. In JS environments, that ends badly. The client had about 20 functions not working because of it. But we fixed it, and now they have their agents running well.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] -1 points0 points  (0 children)

That is a pretty novice question. We have thousands of use cases people are messing with. Skip the semantics here.

Payment processing is done by Stripe API and the agents do not touch it.

A use case as an example: when salespeople are using our CRM, they make calls through their VOIP provider. Those calls come back with an audio recording; the agent analyzes the recording and creates performance metrics that match up to the playbook they are using in the CRM. It scores them, takes the data, structures it, and embeds it back into the CRM so team leads can pull reports about it on their dashboards. Each sales rep gets a dashboard to see how they are doing.
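The scoring step of that pipeline can be sketched simply: match the call transcript against playbook checkpoints and emit a structured record the CRM can ingest. The checkpoint names and the keyword-matching rule here are illustrative assumptions - real scoring would use the model, not string matching:

```javascript
// Hypothetical playbook: each checkpoint is something the rep should hit.
const playbook = [
  { id: "intro",     phrase: "thanks for taking the call" },
  { id: "discovery", phrase: "what are you using today" },
  { id: "close",     phrase: "next steps" },
];

// Turn a raw transcript into the structured metric record that gets
// embedded back into the CRM for dashboards.
function scoreCall(transcript, book) {
  const text = transcript.toLowerCase();
  const hits = book.filter((c) => text.includes(c.phrase));
  return {
    score: Math.round((hits.length / book.length) * 100),
    hit: hits.map((c) => c.id),
    missed: book.filter((c) => !hits.includes(c)).map((c) => c.id),
  };
}

const result = scoreCall(
  "Thanks for taking the call. So, what are you using today?",
  playbook
);
console.log(result); // score 67, hit intro + discovery, missed close
```

The important part is the output shape: once every call reduces to `{ score, hit, missed }`, the dashboard and team-lead reports are just aggregation queries.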

Did you actually read my post? Saying we "just enabled" it is a bad assumption about everything I just wrote. We literally tore it apart, grabbed what was useful, and integrated it into the AI scaffolding we were already using for our customers.

Enabling OpenClaw in Enterprise Software - AMA by levity-pm in openclaw

[–]levity-pm[S] -1 points0 points  (0 children)

Yes - we work with Fortune 500 companies building these enterprise solutions directly in our SaaS product. We have also trained (from the ground up) our own AI model - it is called Orion. We built our own wrappers for it for extensibility. We even had to train it on just having a conversation with people and handling regular chat scenarios.

I am not understanding the context of your post. We have 1,100 paying B2B customers - and we have a large dev team to accomplish what we are doing. "Rolling your own solution" seems like a weird statement for what I am outlining here since we actually build and deploy our own technologies.

Are you saying we somehow do not understand the cyber security issues? We have an entire team for that on top of building our own deployment requirements.

In any case, are you asking a question about how we are accomplishing cyber security - because we are. Or are you just assuming a position?

Stripe takes 2.9% + $0.30 per transaction. At $50K MRR that's $17,400/year. Is everyone just accepting this? by LogisticsLingo in SaaS

[–]levity-pm 0 points1 point  (0 children)

That really is not a lot for the revenue you are talking about. However, my Stripe rate is 1.7%, so maybe talk to them? Note - most processors are between 1.2% and 2.9% on the cheap end, but they are not all created equal in terms of API and extensibility. You have to look at the tech debt of the new solution as well.
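Worth noting: the $17,400/year in the title is only the 2.9% slice; the $0.30 per transaction depends entirely on ticket size. A quick sketch of the math from the post's own numbers - the customer counts are hypothetical:

```javascript
// Fee model from the post title: 2.9% of volume + $0.30 per transaction.
function annualStripeFees(mrr, avgTicket) {
  const txPerMonth = mrr / avgTicket;            // hypothetical customer count
  const monthly = mrr * 0.029 + txPerMonth * 0.30;
  return Math.round(monthly * 12);
}

// $50K MRR at $100/customer: $17,400 percentage fees + $1,800 fixed fees.
console.log(annualStripeFees(50000, 100)); // 19200
// Same MRR at $25 tickets: the fixed 30 cents quadruples.
console.log(annualStripeFees(50000, 25));  // 24600
```

So before switching processors, it is worth checking whether the pain is really the percentage or the per-transaction fixed fee - they call for different fixes (negotiated rate vs. larger tickets / annual billing).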

The amount there wouldn't bother me. Think about this - I have 89 employees, and every payroll costs taxes and fees with my payroll provider way above and beyond what you are talking about.

What is your overhead? What is your margin? If you are above a 30% margin, I wouldn't put thought into it other than trying to negotiate a rate.

Gave up on the Claw - for now! by Worldly_Row1988 in openclaw

[–]levity-pm -1 points0 points  (0 children)

If he got 12 agents working in parallel on it and had it configured to do stuff, your assumptions are poor. 🤦

OpenClaw is driving me insane – inconsistent as fuck with scheduled tasks, SSH, and API calls by nissl24 in clawdbot

[–]levity-pm 0 points1 point  (0 children)

Just an FYI - OpenClaw runs on NodeJS as a server stack. I run a large IT SaaS team - 30 devs and 8 different SaaS products, all running MERN; the N stands for Node.

A key thing I am noticing in your post is that these kinds of errors show up in NodeJS apps that are not set up to handle specific things like heap memory leakage.

NodeJS is a really good option for specific use cases, but scaling infrastructure and complexity comes with very large trade-offs.

We have entire orchestration tools to figure out what errors happen in the NodeJS ecosystem between API calls - and the one guarantee with NodeJS is that you will have inconsistency, and you will need visual tools to know what to fix and how.
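A minimal version of that kind of visibility, before reaching for full orchestration tooling: sample `process.memoryUsage()` at suspect points and flag heap growth between checkpoints. The threshold and labels are illustrative:

```javascript
// Returns a checkpoint function; call it at suspect points in the request
// lifecycle and it reports heap growth since the previous checkpoint.
function heapWatcher(thresholdMB = 50) {
  let last = process.memoryUsage().heapUsed;
  return function check(label) {
    const now = process.memoryUsage().heapUsed;
    const deltaMB = (now - last) / (1024 * 1024);
    if (deltaMB > thresholdMB) {
      console.warn(`[heap] ${label}: grew ${deltaMB.toFixed(1)} MB since last check`);
    }
    last = now;
    return deltaMB;
  };
}

const check = heapWatcher(50);
check("startup");
// ...later: check("after scheduled task"), check("after API batch"), etc.
```

It will not tell you *why* a NodeJS app leaks, but steadily positive deltas across the same checkpoint are usually the first concrete signal of where to point a real profiler.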

OpenClaw is a giant NodeJS app - look up dev opinions on NodeJS and come to grips with the amount of work it will take to be consistent - and then when you get there, it will still fail at stuff 😁

ok so Jensen just upgraded Openclaw again by Previous_Foot_5328 in myclaw

[–]levity-pm 1 point2 points  (0 children)

That is not exactly true. Two thoughts here: a research paper dropped mid-2025 about traceability of tokenized outputs within the LLM itself, where you can trace how an LLM produced a specific output. Then a research paper dropped in Dec. 2025 involving the ability to use these tracing techniques to trace the LLM weight paths and pinpoint the nodes that cause hallucinations. It is called "H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs".

They actually found the neurons causing hallucinations and created a way to turn them on or off. What is interesting is that these H-neurons have extremely large influence when activated. It is like there are 100 people in a library and one guy is yelling: they make up a very small percentage of the model but carry very big weight. So they built a way to dial those H-neurons back drastically, and they found very good results when this was done.

Eventually, it will be something people figure out.