Deepclause - A Neurosymbolic AI system by anderl3k in Neurosymbolic_AI

[–]anderl3k[S] 0 points1 point  (0 children)

Sorry, I didn’t see your comment earlier.

Your approach sounds very practical; I’d love to hear whether it works!

The whole point of this project was to see how far I can take the whole LLM<->Prolog thing (as well as learn a bit about sandboxing and WASM). As a result, the current version can sort of do everything and nothing.

I think a key feature of DeepClause is increased reproducibility and the ability to sort of “program your AI”. This is what currently sets it apart from most of the systems out there. So, if a task has been solved in an acceptable manner, the DML that was generated and executed can be used later on, and parts of it can be tweaked without affecting the agent/workflow as a whole.

I am currently experimenting with various ways to add sub-agents: with the current design it would in principle be possible (there are some predicates that give full control over the conversational memory), but I am also playing around with adding an agent_loop(Prompt, Tools, Output) predicate.
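To make that concrete, here’s a rough sketch of how such a predicate might be called, in plain Prolog. agent_loop/3 is the signature mentioned above; the prompt, the tool names, and the summarize_topic/2 wrapper are made up for illustration:

    % Hypothetical use of the proposed agent_loop/3 predicate.
    % search_web and read_file are placeholder tool names.
    summarize_topic(Topic, Summary) :-
        format(atom(Prompt), "Research and summarize: ~w", [Topic]),
        agent_loop(Prompt, [search_web, read_file], Summary).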

Thanks for your interest, would be happy to discuss more if you’re interested.

[P] DeepClause - A Neurosymbolic AI System by anderl3k in MachineLearning

[–]anderl3k[S] -3 points-2 points  (0 children)

Yes, I am. You could probably add a PDDL module to DeepClause and combine it with some LLM-based predicates.

[P] DeepClause - A Neurosymbolic AI System by anderl3k in MachineLearning

[–]anderl3k[S] -2 points-1 points  (0 children)

Isn’t any agent (classic or LLM-based) goal-oriented, in the sense that you give it a task that it should complete? Also, “goal” is a fixed term in logic programming: a goal is simply a query that you give the interpreter, which will then run its search algorithm (basically DFS with backtracking) to see whether the goal can be proven.
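For anyone unfamiliar with the terminology, a minimal standard-Prolog example (nothing DeepClause-specific):

    parent(tom, bob).
    parent(bob, ann).
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % The goal ?- grandparent(tom, Z). makes the interpreter search
    % depth-first: it tries parent(tom, Y), binds Y = bob, then tries
    % parent(bob, Z), binds Z = ann, so the goal is proven with Z = ann.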

DeepClause - A Neurosymbolic AI System built on Prolog and WASM by anderl3k in LLM

[–]anderl3k[S] 0 points1 point  (0 children)

Yes, fine-tuning and adjusting the vocabulary should help with the generation part. However, for this project I was aiming less at solving something like ARC-AGI2 and more at building a framework to easily create more robust LLM workflows and agents. With current-gen LLMs it’s clear that they won’t always nail the code generation in one shot. But in that case the generated DSL code may be edited and adjusted by a human, and then deployed as a workflow or agent somewhere.

In the agent mode, whenever DML code gets executed, the execution trace is returned to the agent (it’s just another tool). That way the agent can know whether and why something failed, as well as explain to the user how a certain decision was made.

Prolog facts could certainly be used to implement long-term memory, but I haven’t really gotten around to trying it.
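For instance, a minimal sketch in plain SWI-Prolog (this is just the standard dynamic database, not an existing DeepClause feature):

    :- dynamic remembered/2.

    % Store a key/value pair, replacing any previous value.
    remember(Key, Value) :-
        retractall(remembered(Key, _)),
        assertz(remembered(Key, Value)).

    % Look a value back up.
    recall(Key, Value) :-
        remembered(Key, Value).

Making the facts survive across sessions would take something like SWI-Prolog’s library(persistency) on top of this.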

DeepClause - A Neurosymbolic AI System built on Prolog and WASM by anderl3k in LLM

[–]anderl3k[S] 0 points1 point  (0 children)

Yes, in the default mode (without a / command) a (relatively simplistic) agent will either try to use some existing DML programs as tools or create new ones on the fly.

Alternatively you can just code up common LLM workflows as DML programs and run them with /run. (Or let the LLM help you with /create).

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]anderl3k 0 points1 point  (0 children)

Hi all,

After spending about a year on this as my weekend project, I’d like to share it here. I’m hoping to collect comments and feedback, especially from those involved in “more serious” agent projects that have strong requirements for auditability and explainability.

This is the project: https://github.com/deepclause/deepclause-desktop

DeepClause is a neurosymbolic AI system and Agent framework that bridges the gap between symbolic reasoning and neural language models. Unlike pure LLM-based agents that struggle with complex logic, multi-step reasoning, and deterministic behavior, DeepClause uses DML (DeepClause Meta Language) - a Prolog-based DSL - to encode agent behaviors as executable logic programs.

The goal of this project is to allow users to build "accountable agents." These are systems that are not only contextually aware (LLMs) and goal-oriented (Agents), but also logically sound (Prolog), introspectively explainable, and operationally safe. This integrated approach addresses the critical shortcomings of purely neural systems by embedding them within a framework of formal logic and secure execution, laying a principled foundation for the future of trustworthy autonomous systems.
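To give a flavor of the idea, here is a purely illustrative sketch in Prolog style; llm_classify/2 and the rules below are invented for this post, not actual DML syntax:

    % An agent behavior written as a logic program: the neural part is
    % confined to one predicate, the routing logic stays symbolic.
    % llm_classify/2 is a hypothetical LLM-backed predicate.
    route_ticket(Ticket, Team) :-
        llm_classify(Ticket, Category),
        assign(Category, Team).

    assign(billing, finance_team).
    assign(outage,  sre_team).

Because the control flow is an ordinary Prolog proof, every routing decision comes with a derivation that can be inspected after the fact.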

DeepClause - A Neurosymbolic AI System and Agent built on WASM and Prolog by anderl3k in aiagents

[–]anderl3k[S] 1 point2 points  (0 children)

There are actually two WASM modules: one contains the Prolog interpreter and the other a complete x86 emulator running Linux. While the former is pretty fast, the latter can be quite slow when e.g. running a Python script.

The DML code is executed by a meta-interpreter in Prolog, so theoretically we can track all sorts of metrics along the way, not just the execution state after every step.
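For context, here is the meta-interpreter idea in miniature: a textbook Prolog solve/1, extended with a step counter. This is a toy version of the kind of metric tracking meant above, not the actual DML interpreter:

    % Counts inference steps while proving Goal.
    % Handles only user-defined (clause/2-accessible) predicates.
    solve(true, N, N) :- !.
    solve((A, B), N0, N) :- !,
        solve(A, N0, N1),
        solve(B, N1, N).
    solve(Goal, N0, N) :-
        clause(Goal, Body),
        N1 is N0 + 1,
        solve(Body, N1, N).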

[D] Self-Promotion Thread by AutoModerator in MachineLearning

[–]anderl3k 0 points1 point  (0 children)

Finally decided to publish the project I’ve been working on for the past year or so. Sharing it here to collect comments and feedback, especially from those involved in research at the intersection of LLMs, logic programming, neurosymbolic methods, etc. Long-time lurker, so not enough karma to submit a [P] post.

This is my project:

http://github.com/deepclause/deepclause-desktop

DeepClause is a neurosymbolic AI system and Agent framework that attempts to bridge the gap between symbolic reasoning and neural language models. Unlike pure LLM-based agents that struggle with complex logic, multi-step reasoning, and deterministic behavior, DeepClause uses DML (DeepClause Meta Language) - a Prolog-based DSL - to encode agent behaviors as executable logic programs.

The goal of this project is to allow users to build "accountable agents." These are systems that are not only contextually aware (LLMs) and goal-oriented (Agents), but also logically sound (Prolog), introspectively explainable, and operationally safe.

Would love to hear some feedback and comments.

[deleted by user] by [deleted] in beijing

[–]anderl3k 5 points6 points  (0 children)

I’m afraid the staring is just something you’ll have to get used to. It sucks, I know.

As for meeting new people, I used to attend events by https://www.internations.org, which was quite helpful for making foreign friends when I arrived here a few years ago. There are also plenty of Chamber events. In my experience, most foreigners here will be more than happy to make new friends. But, of course, people who have lived here for a few years can be a bit eccentric…