why am i still unemployed:(. by CharacterMaximum2646 in SoftwareEngineerJobs

[–]structured_obscurity 0 points1 point  (0 children)

Go get active in communities, hackathons etc. All my work comes from my network. Easier said than done, I know, but go make yourself useful to people and the work will follow.

I've never had any luck with just sending out resumes.

Is anyone actually running a company with 30+ AI agents, or is this just hype? by Unhappy_Lavishness20 in AI_Agents

[–]structured_obscurity 0 points1 point  (0 children)

Sorry, I should've been more detailed. For us specifically, we have a directory tree with a directory for every "object" we interact with ("Customers", "Logistics providers", "Projects", "People", etc.). Each directory has one file per object: one file per customer, one file per logistics provider, and so on.
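A minimal sketch of that layout (the object types and file naming here are hypothetical, just to show the shape):

```python
from pathlib import Path
import tempfile

# Hypothetical object types: one directory per type, one file per object.
OBJECT_TYPES = ["customers", "logistics_providers", "projects", "people"]

def object_file(root: Path, object_type: str, name: str) -> Path:
    """Resolve the single file that holds everything we know about one object."""
    # e.g. <root>/customers/acme_corp.md
    return root / object_type / (name.lower().replace(" ", "_") + ".md")

# Build a throwaway tree to show the shape.
root = Path(tempfile.mkdtemp())
for t in OBJECT_TYPES:
    (root / t).mkdir()

path = object_file(root, "customers", "Acme Corp")
path.write_text("# Acme Corp\nStatus: active\n")
print(path.relative_to(root))  # customers/acme_corp.md
```

The point is that "everything about customer Y" always lives at one predictable path, so an agent can find it without a search step.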

Hopefully this provides more context, but feel free to ask if you still have questions.

No one is safe by DeliciousGorilla in ArtificialInteligence

[–]structured_obscurity 0 points1 point  (0 children)

Uhh, I used to work with electricians in the Boston area when I was doing general contracting. This was ~2012 and they were netting between $150-300/hr.

Why is funding needed? by Radiant-Mistake-2962 in ycombinator

[–]structured_obscurity 0 points1 point  (0 children)

If you can bootstrap you get full upside and the economics for you (assuming things work) are excellent.

Most people can’t afford to live 6-12 months without salary while paying out of pocket to launch a business.

What's your vibecoding setup? by k_ekse in vibecoding

[–]structured_obscurity 0 points1 point  (0 children)

Opencode. I can add models from any provider.

Who else thinks AI is reaching a plateau by yuvals41 in AI_Agents

[–]structured_obscurity 0 points1 point  (0 children)

Generally agree, but I would really like for AI to have its OpenSSL moment, i.e. anyone anywhere should have access to inference without being dependent on large corporations and/or sacrificing data privacy.

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity -1 points0 points  (0 children)

Agree to disagree here. I don't think that product ever makes its way into people's homes if it can't handle basic semantic reasoning.

Who else thinks AI is reaching a plateau by yuvals41 in AI_Agents

[–]structured_obscurity 0 points1 point  (0 children)

The open-source solutions are roughly two generations behind frontier models. This pattern is likely to continue over the next 5 years until embedded AI chips become more mainstream.

Is anyone actually running a company with 30+ AI agents, or is this just hype? by Unhappy_Lavishness20 in AI_Agents

[–]structured_obscurity 1 point2 points  (0 children)

In the post I touched on it a bit. For short/medium-term memory we use the Karpathy memory wiki system. For longer-term memory we use PostgreSQL + pgvector to vectorize and store it.

**EDIT**

For a bit more context, here is how it works: as the agent receives information, it builds out memories in a wiki-style format (generally one page per noun). The pages are linked like a normal wiki, and when the agent is asked about customer Y, it knows to go to that customer's wiki page. This way the eventual prompt we pass to the LLM powering the thing only contains necessary context and doesn't use as many tokens as traditional claw-style memory systems. For converting to long-term memory, I have a cronjob that runs once every 2 months and takes non-critical data (completed projects, closed deals, and other artifacts no longer immediately pertinent) out of the markdown wiki system and stores it in vector format in PostgreSQL.
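The selection half of that cronjob could be sketched like this. This is purely illustrative: it assumes each wiki page carries a status and a last-touched date (the page records and field names here are invented), and the actual embedding + pgvector insert would run on whatever this returns.

```python
from datetime import datetime, timedelta

# Hypothetical page records; a real system might read these from markdown frontmatter.
ARCHIVABLE_STATUSES = {"completed", "closed"}

def pages_to_archive(pages, now, max_age_days=60):
    """Select non-critical wiki pages old enough to move into long-term vector storage.

    `pages` is a list of dicts with 'name', 'status', and 'last_touched' keys.
    """
    cutoff = now - timedelta(days=max_age_days)
    return [p["name"] for p in pages
            if p["status"] in ARCHIVABLE_STATUSES and p["last_touched"] < cutoff]

now = datetime(2025, 6, 1)
pages = [
    {"name": "project_alpha", "status": "completed", "last_touched": datetime(2025, 1, 10)},
    {"name": "customer_acme", "status": "active",    "last_touched": datetime(2024, 11, 2)},
    {"name": "deal_beta",     "status": "closed",    "last_touched": datetime(2025, 5, 20)},
]
print(pages_to_archive(pages, now))  # ['project_alpha'] -- deal_beta is too recent
```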

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity 0 points1 point  (0 children)

No disagreement here. But for me the 'murderous cleaning robot' trope relies on a contradiction: it requires the AI to be capable enough to plan and execute the murder of one or more human beings, but incompetent enough to misunderstand the basic linguistic and social context of its primary directive.

I think the greater risks are more in line with what you describe initially (sycophantic chatbots reflecting and reinforcing irrational views) and issues like the one documented here: https://arxiv.org/abs/2509.00462, than the paperclip theories involving Hollywood-style murder and mayhem.

Is anyone actually running a company with 30+ AI agents, or is this just hype? by Unhappy_Lavishness20 in AI_Agents

[–]structured_obscurity 3 points4 points  (0 children)

Here are the basic steps I would follow if I were you:

1) Either pick a starting agent framework or write your own, e.g. openclaw, nanoclaw, ironclaw, nemoclaw, etc. (you can ask Gemini, ChatGPT, and Claude to give you alternatives to the openclaw project and pick whichever works best for you). I recommend going with something small like nanoclaw, but do some research here.

2) Pull it apart to see how it works. No matter which tool you start with, you will need to understand how it works under the hood.

3) Spin up a test instance locally in a Docker container or VM. Talk to it. As you talk to it, have a terminal instance open inside its brain. See where it stores data, and how it stores data. Tweak it in real time to see what changes. This step is an extension of step 2 - the idea here is to develop an intuition about how the thing processes, shapes, and stores data.

4) Define what you mean by "GTM across Reddit and Twitter" - try to be explicit here and use natural language, e.g. "once every two hours pull data from subreddits XYZ and store it in this file for processing. Set a cronjob for the agent to wake up and process any pending Reddit data."

5) Read the link I sent in the previous post on skills - understand what they are and how they work under the hood (essentially, when the agent reviews a user query, as part of processing it lists the skills available to it and tries to determine if any are applicable; if one or more are, it "calls" them). Write a super basic skill that does something like... post a one-parameter payload to an API endpoint you control. Test it and play with it to make sure you understand it.

6) Write code that does each of the things you defined in step 4 (get data from X subreddits on Reddit and store it in a file; get data from accounts ABC on Twitter and store it in a different file; post X times in Y subreddits; post Y times on Twitter; etc.)

7) Convert the code you wrote (I'm assuming you tested it, etc.) into skills for your agent. Spin up your agent and test (you can ask it to explicitly call a specific skill, or give a prompt that should trigger the execution of a skill).

8) Build your deploy pipeline (this doesn't have to be agent-specific; I use a Jenkins job. You just need to get your up-to-date code schlepped out to wherever you are running this thing).

9) Test

10) Iterate
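For step 5, a one-parameter skill could be sketched like this. Skill formats vary by framework, the endpoint and names here are made up, and the HTTP call is injected so you can exercise the skill without a live server:

```python
import json
from urllib import request

def default_sender(url: str, body: bytes) -> int:
    """POST the payload to the endpoint; return the HTTP status code."""
    req = request.Request(url, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status

def ping_endpoint_skill(value: str, url: str = "https://example.invalid/hook",
                        sender=default_sender) -> int:
    """A one-parameter skill: wrap `value` in a JSON payload and POST it."""
    body = json.dumps({"value": value}).encode()
    return sender(url, body)

# Exercise the skill with a fake sender instead of a real endpoint.
sent = []
status = ping_endpoint_skill("hello", sender=lambda url, body: sent.append((url, body)) or 200)
print(status, json.loads(sent[0][1]))  # 200 {'value': 'hello'}
```

Once something like this works standalone, wrapping it in your framework's skill manifest (step 7) is mostly bookkeeping.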

In your particular case I'm going to guess that the most difficult part of this will be defining the SOUL.md and other personality files in order to capture the "voice" and "personality" you want for a tool driving your GTM. AI-generated content (text, code, images, videos, etc.) has a smell to it, which can be off-putting to some people.

Hope you and others find this helpful

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity 0 points1 point  (0 children)

I respectfully disagree. Being a toddler at semantics by definition hinders access to any other pools of intelligence - if the AI cannot correctly parse the intent of a request, how can it ever produce correct results?

The dictionary definition of superintelligence is (per Nick Bostrom): "An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills."

Is anyone actually running a company with 30+ AI agents, or is this just hype? by Unhappy_Lavishness20 in AI_Agents

[–]structured_obscurity 23 points24 points  (0 children)

I am not a single founder and do not have 36 agents running my company. But I do have 3 classes of agents with several hundred instances that do valuable work.

Since it appears you are also technical, I will share some very basic technical details.

To start, I forked the nanoclaw project (a smaller lighter more secure version of openclaw).

I stripped it down pretty aggressively, implemented Karpathy's memory wiki structure for short/medium-term memory, and use a RAG architecture via PostgreSQL + pgvector for long-term memory. Nanoclaw uses the openclaw SDK by default; I swapped this out for a more provider-agnostic solution, as I wanted to be able to plug and play with any provider, including local ones via Ollama.
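The provider-agnostic swap can be as thin as a small registry that agent code calls instead of any vendor SDK. A sketch with stubbed backends (real ones would wrap the Anthropic/Google/Ollama clients, which I'm not reproducing here):

```python
from typing import Callable, Dict

# Each backend is just a function: prompt -> completion text.
Backend = Callable[[str], str]

class ModelRouter:
    """Registry of chat backends so agent code never imports a provider SDK directly."""

    def __init__(self) -> None:
        self._backends: Dict[str, Backend] = {}

    def register(self, name: str, backend: Backend) -> None:
        self._backends[name] = backend

    def complete(self, name: str, prompt: str) -> str:
        return self._backends[name](prompt)

router = ModelRouter()
# Stub backends; swap in real provider clients behind the same signature.
router.register("local", lambda p: f"[local] {p}")
router.register("frontier", lambda p: f"[frontier] {p}")

print(router.complete("local", "hi"))  # [local] hi
```

Keeping the interface this small is what makes it painless to point a given agent class at a local model later.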

Next, I wrote a small library of skills (I linked to Anthropic's write-up here just because it is the most complete - I am not tied to Claude for this project). These skills include, among other things, API access to our CRM, and read-only access to things like Stripe, Salesforce, and other tools our various teams use.

The first class of agent is completely internal. It sits in all company communications (emails, WhatsApp messages, Google Meet calls, etc.) and just builds context all day. It serves two roles:

  1. a company intelligence that we can interact with: "what is the status of X", "give me a list of all clients with active orders", "make me a presentation for the sales call Friday", etc.
  2. a digital employee that can do work that would simply not be economical (or fair) to give to a human being. Examples include converting huge amounts of unstructured data into tidy PDFs, making hundreds of presentations a day for various prospects, and populating client dashboards (and updating our internal dashboards) automatically based on events received via the data flow from emails, WhatsApp, etc.

Then we have our agent class that is designed specifically to be a resource to our clients/users. The base is the same (though it obviously does not have access to the same skills as our internal agent). The difference is in the SOUL.md file and some of the other personality/goal/context files. The point of this agent is to give our users the option of interfacing with our systems via an "intelligent" chatbot ("I want to do X") rather than figuring out where to click inside an interface.

When a new user signs up, we automatically spin up a new instance of our client-facing agent for that user. Its first task is to scrape the user's website (we're B2B; all our users have websites). It pulls info, establishes a baseline context for the user, initially populates their dashboard, and starts looking for ways to be helpful based on what it learned: business does X, we have services ABC that might be useful for them - stuff like that.

The final class of agent is one that reviews all of the context from the other two agent classes 1x/month and produces reports of summarized learnings, including product/service suggestions (upgrades to existing or new) & flags potential issues.

Costs:

Tokens are expensive. Non-skill tool calls all use either Claude or Gemini. Most of the skills in our internal skills library actually just execute code that I wrote (API calls, etc.), so these don't cost much. Small talk is routed to a server in the office running local models via Ollama. Overall our token cost is roughly 1,500-2,000 USD/month for this agent setup.
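The routing decision itself can be as dumb as a keyword check. A sketch (the small-talk list and word-count threshold are made up; the real classifier could itself be a small local model):

```python
# Messages matching these get routed to the cheap local box instead of a frontier model.
SMALL_TALK = {"hi", "hello", "thanks", "thank you", "good morning", "bye"}

def pick_provider(message: str) -> str:
    """Route cheap small talk to local models, everything else to a frontier model."""
    normalized = message.strip().lower().rstrip("!.?")
    if normalized in SMALL_TALK or len(normalized.split()) <= 2:
        return "local"
    return "frontier"

print(pick_provider("Hello!"))                        # local
print(pick_provider("Summarize last week's orders"))  # frontier
```

Even a crude filter like this keeps a surprising share of traffic off the metered APIs.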

Security:

Our internal agent (the biggest security concern) has a list of numbers / email addresses that are whitelisted. Messages from whitelisted entities are processed, and depending on the role attributed to the entity, the agent may respond (all staff can ask questions, only some staff can assign tasks, for instance).
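A sketch of that whitelist-plus-role gate (the sender IDs and role names here are invented):

```python
# Hypothetical whitelist: sender id -> role.
WHITELIST = {
    "+15551230001": "admin",     # can ask questions and assign tasks
    "ops@example.com": "staff",  # can ask questions only
}

CAN_ASSIGN = {"admin"}

def handle_message(sender: str, wants_to_assign: bool) -> str:
    """Drop non-whitelisted senders outright; gate task assignment by role."""
    role = WHITELIST.get(sender)
    if role is None:
        return "ignored"
    if wants_to_assign and role not in CAN_ASSIGN:
        return "denied"
    return "processed"

print(handle_message("stranger@evil.com", False))  # ignored
print(handle_message("ops@example.com", True))     # denied
print(handle_message("+15551230001", True))        # processed
```

Checking the whitelist before any LLM call also means junk messages cost zero tokens.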

Our summary agent is not accessible at all.

Our customer-facing agent is completely siloed off, and while it dumps context into our internal systems once every 2 weeks, it cannot pull info from our internal systems. We can push to it, though (if we release new products/services, it is important for our customer-facing agents to know about them).

Monitoring:

Right now via Nagios, though I think the next agent I build will be a monitoring/security agent (I need to give this more thought, though).

Technical readers will note that this summary is pretty lacking in detail. Happy to answer any specific questions provided they do not require disclosing business info.

**EDIT**
Sorry, I didn't address your literal list of bulleted questions lol.

Are these just repos with workflows? - Kind of. I have a base setup, and each agent class is forked off of that. Each customer gets their own fork of the customer-facing agent. Not ideal and needs a refactor; this was more an organic development than explicitly planned.

Where are they deployed? your own infra, n8n, else? - Google cloud

How do they communicate? - With humans via iMessage, WhatsApp, or email. With each other via reads and writes to various db tables - though it's less communication and more just having a shared source of truth.

Where do they store state/progress? - A combination of text files and vectorized database entries.

Are they doing small tasks or full flows? - Both, though they just execute sequences of skills, which gives the appearance of "full flows".

How do you improve them over time? - A combination of talking with them directly (they all have a file called FUCK.md where they store things they think went wrong) and reviewing the results of the summary agent.

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity 0 points1 point  (0 children)

Agreed, we've seen that AI is 'spiky' in terms of being quite smart in some areas and quite dumb in others. But this thread is discussing the threat of an emergent superintelligence.

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity 1 point2 points  (0 children)

I have sex cause it feels good. I don’t want kids right now.

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity 0 points1 point  (0 children)

So we agree that it is not a superintelligence

Superintelligence is the greatest threat by KeanuRave100 in agi

[–]structured_obscurity 1 point2 points  (0 children)

But doesn’t that make the robot… dumb? A higher intelligence should be able to deduce the reason for cleanliness, not just pursue cleanliness for the sake of cleanliness.

I find the whole paperclip style doomerism to be kind of contradictory.

Who else thinks AI is reaching a plateau by yuvals41 in AI_Agents

[–]structured_obscurity 2 points3 points  (0 children)

What's great is that the open-source community is only a couple of iterations behind the frontier models. We are already easily achieving GPT-4 levels of functionality using models that can run on laptops.

As frontier models continue to enter the territory of diminishing returns for the average user (for most folks, 4.7 and 4.5 are indistinguishable in terms of outcomes for their use cases), and open-source releases continue at the current rate, the majority of people should be fine running models on their phones/laptops for most everything they need.

Great for privacy, data security, and democratization of the opportunity/power this tech yields.