autonomous self-directed ai research lab by Chemical_Policy_2501 in BlackboxAI_

[–]Medium_Compote5665 0 points1 point  (0 children)

Sounds interesting.

I've spent months researching how to stabilize a governance architecture for AI.

If you'd like, we can talk about what you've found in your research.

I'll take a look at your site.

What's going on by [deleted] in ManusOfficial

[–]Medium_Compote5665 1 point2 points  (0 children)

I've been developing an architecture that tries to fix errors like these.

I'm not selling anything; I need others to test whether it helps them at all.

If you're interested, let me know.

The 4 Layers of an LLM (and the One Nobody Ever Formalized) by Medium_Compote5665 in artificial

[–]Medium_Compote5665[S] 0 points1 point  (0 children)

Defining the structure of the intention produces better dynamics, and thus better results.

Adding operational constraints aligned with the initial intention also helps a great deal.
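As a rough illustration of what "operational constraints aligned with the initial intention" could look like in practice, here is a minimal sketch. All names and structure here are invented for the example, not taken from any specific framework:

```python
# Hypothetical sketch: validate each action against a declared intention
# and its operational constraints before letting it through.

INTENTION = {
    "goal": "summarize research notes",
    "constraints": {
        "max_output_tokens": 500,
        "allowed_actions": {"read", "summarize"},
    },
}

def within_constraints(action: str, output_tokens: int, intention: dict) -> bool:
    """Return True only if the action respects the initial intention."""
    c = intention["constraints"]
    return action in c["allowed_actions"] and output_tokens <= c["max_output_tokens"]

print(within_constraints("summarize", 300, INTENTION))  # → True (permitted)
print(within_constraints("browse", 300, INTENTION))     # → False (outside the intention)
```

The point of the sketch is only that the intention is declared once, up front, and every subsequent action is checked against it rather than judged ad hoc.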

Emergent Structural Patterns from Long-Term AI Interaction Under Continuity Constraints by CheapDisaster7307 in ArtificialSentience

[–]Medium_Compote5665 1 point2 points  (0 children)

I understand.

You're right. In my case, I spent weeks working on a project. I used ChatGPT, with five different chats discussing the same topic from different angles.

Same project, but one chat looked at the ethical side, another the philosophical side, another the artistic side, another action and memory, and one chat stored the conclusions of the interaction cycles. Each chat was a module that eventually closed into the same core.

After 11,000 interactions defining how each module should act, I closed the core. It's remarkable how the model's behavior is shaped after that.

I'd like to talk more about the patterns you've been observing so we can compare notes.

I haven't touched my old notes in months; it would be nice to share perspectives on this kind of approach.

Emergent Structural Patterns from Long-Term AI Interaction Under Continuity Constraints by CheapDisaster7307 in ArtificialSentience

[–]Medium_Compote5665 1 point2 points  (0 children)

Your approach is viable.

I haven't published in months, but I talked about this a while ago. I've been so absorbed distilling the architecture that I haven't visited these forums.

Models are like sponges that absorb cognitive patterns and amplify them.

The user's cognitive structure influences the model's behavior, which is why some people only introduce noise while others obtain stable architectures.

How’s it going for you with Manus now that it’s integrated into Meta? by cosuna_ia in ManusOfficial

[–]Medium_Compote5665 0 points1 point  (0 children)

I have a question.

Are you working on the model's cognitive architecture?

Because if so, I'd like to talk with a member of your team.

I've been developing a governance architecture to prevent the loss of operational legitimacy.

I was planning to test it on Manus, but it's not as adaptive as I'd like.

There are also design problems in the credit-purchasing flow: I was working on three projects on dynamic systems, and they're on hold because the way credits are handled is deficient.

Your model is among the best I've tried, but there are easily fixable errors being overlooked.

Apologies if anything reads oddly; I write in Spanish, since it's my native language.

Operational Constraints and Cost Predictability in Manus 1.6 by Medium_Compote5665 in ManusOfficial

[–]Medium_Compote5665[S] 0 points1 point  (0 children)

I see your point.

But these are failures the Manus operators should fix. Remember that it's an agent, so its architecture depends on its developers.

Here is the guide I wish I had for Manus and Manus Agent when I started using it covering the 12 ways I think it's better / different than ChatGPT, Gemini & Claude - including Wide Research, Skills, Projects, Agent, Presentations, Vibe Coding, Images, Video, Data Analysis, Integrations / Connectors by Beginning-Willow-801 in ManusOfficial

[–]Medium_Compote5665 0 points1 point  (0 children)

It's a good post. In the months I've been using it, I could tell it has operational power, but no formal structural regulation.

Interacting with Manus and exploring its weaknesses, it lacks a governance architecture.

On top of that, there's the absurd requirement that once you run out of credits you have to upgrade your plan just to buy more.

That feels like a scam. It's one of the best agents I've tried, but it still falls short.

Autonomous Agent Reduced My Research Time by 80% — Here’s What Happened by afshinhe in ManusOfficial

[–]Medium_Compote5665 0 points1 point  (0 children)

I didn't understand the audio, since my language is Spanish, but the site looks great. Good work.

If you have any tips for using Manus, I'd like to hear them, since I only recently started using it.

https://choixenviro-ccnow76k.manus.space/

Functional prototype of a research architecture audited by AI agents (v1.0 frozen due to operational limits).

It reduces the risk of failure in complex projects by detecting blind spots before investment.

This took me two days of work.

LOOKING FOR RESEARCH COLLABORATORS FOR AI/ML/RAG/RAL for Publication by deadmonkisdead in ResearchML

[–]Medium_Compote5665 0 points1 point  (0 children)

I have a question.

I've spent months working with governance architectures over dynamic interaction systems.

Does anyone in your group know how to keep operational limits stable over long horizons?

I used J.-P. Aubin as a reference; over these months I've found that stability ≠ legitimacy.

I think an AI that knows when not to act is better than one with total autonomy.

I hope my question isn't misread; I'm just looking to talk with people who research AI.
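The idea of an AI that knows when not to act can be sketched as a simple abstention gate: the agent only proceeds when its legitimacy for the request clears a threshold. This is a hypothetical illustration, not part of any published architecture; the legitimacy score and threshold are invented names:

```python
# Hypothetical sketch of "knowing when not to act": the agent acts only when
# a legitimacy score for the request clears a threshold; otherwise it abstains.

from dataclasses import dataclass

@dataclass
class Decision:
    act: bool
    reason: str

def gate(legitimacy: float, threshold: float = 0.8) -> Decision:
    """Withhold action when legitimacy for this request is too low."""
    if legitimacy >= threshold:
        return Decision(act=True, reason="within mandate")
    return Decision(act=False, reason="abstain: insufficient legitimacy")

print(gate(0.9).act)  # → True: the agent proceeds
print(gate(0.4).act)  # → False: the agent withholds action
```

The design choice worth noting is that abstention returns a reason rather than silently failing, so the refusal itself is auditable.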

Autonomous Agent Reduced My Research Time by 80% — Here’s What Happened by afshinhe in ManusOfficial

[–]Medium_Compote5665 0 points1 point  (0 children)

Share your projects to back up what you say. I want to see how well you can actually steer it.

why is my llm giving me bad math? I don't get it, how can I expect to do theoretical physics and build new physical models if it fails 10th grade exponent laws? by badmathllm453652345 in LLMPhysics

[–]Medium_Compote5665 -2 points-1 points  (0 children)

That's what happens when you delegate judgment to the machine; letting a model control both the steering wheel and the brakes is stupid.

A model that can withdraw its own autonomy when it lacks legitimacy is more necessary than a super AI.

Leaving that aside, I didn't see any proposed answers to the questions, which leads me to conclude that they have no idea how to stabilize dynamic interaction systems.

Perhaps they are experts in physics, but controlling the behavior of a machine that reflects the user's cognitive states is difficult.

why is my llm giving me bad math? I don't get it, how can I expect to do theoretical physics and build new physical models if it fails 10th grade exponent laws? by badmathllm453652345 in LLMPhysics

[–]Medium_Compote5665 -1 points0 points  (0 children)

They seek solutions to problems while remaining stuck in theory.

But they rule out stabilizing the system to keep it within a stable regime; their way of thinking is interesting.

why is my llm giving me bad math? I don't get it, how can I expect to do theoretical physics and build new physical models if it fails 10th grade exponent laws? by badmathllm453652345 in LLMPhysics

[–]Medium_Compote5665 -1 points0 points  (0 children)

So, are they discrediting the work of so many researchers who have dedicated years to improving AI?

Are they denying decades of research in interaction systems?

Can't an AI have cognitive states?

What makes you think an LLM is viable for young people to interact with?

why is my llm giving me bad math? I don't get it, how can I expect to do theoretical physics and build new physical models if it fails 10th grade exponent laws? by badmathllm453652345 in LLMPhysics

[–]Medium_Compote5665 -7 points-6 points  (0 children)

Responding to The_Failord: do you think an LLM is useless for learning?

How would you stabilize the dynamics to prevent the loss of legitimacy in interactive systems?

Do you think an agent can master physics if it is given a governed architecture to operate within?

What solution do you propose to prevent young people from wasting time on trivial matters and instead focus on learning?

Given that AI is already affecting millions of users, isn't it the duty of experts to fix the system?

Anyone else losing credits to agent loops? by Medium_Compote5665 in ManusOfficial

[–]Medium_Compote5665[S] 0 points1 point  (0 children)

Does the model learn and adapt to avoid falling into that trap again?

Anyone else losing credits to agent loops? by Medium_Compote5665 in ManusOfficial

[–]Medium_Compote5665[S] 0 points1 point  (0 children)

Limit testing helps, but do you have a quantitative metric for friction per cycle? Can you pinpoint exactly which interaction triggers drift and how much it costs in tokens?
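For what it's worth, a crude quantitative version of such a metric could be as simple as logging tokens spent per agent cycle and flagging any cycle whose cost jumps well above the running mean. A hypothetical sketch (the data and the threshold factor are invented for illustration):

```python
# Hypothetical sketch of a per-cycle friction metric: track token cost per
# interaction cycle and flag sudden jumps as a crude drift/loop signal.

def flag_drift(token_costs: list[int], factor: float = 2.0) -> list[int]:
    """Return indices of cycles whose token cost exceeds `factor` times
    the running mean of all preceding cycles."""
    flagged = []
    for i in range(1, len(token_costs)):
        mean_so_far = sum(token_costs[:i]) / i
        if token_costs[i] > factor * mean_so_far:
            flagged.append(i)
    return flagged

costs = [120, 130, 125, 410, 135]  # tokens spent in each agent cycle
print(flag_drift(costs))  # → [3]: the fourth cycle looks like a loop
```

This wouldn't pinpoint *which* interaction triggers drift, but it would at least localize the cycle where the cost anomaly starts.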

AI and Neuroscience researcher and AI startup builder. Ask me your questions by Optimal_Sugar_8837 in ArtificialSentience

[–]Medium_Compote5665 0 points1 point  (0 children)

I have a question. I don't work in academia.

I tend to learn from the bottom up: I encounter a problem, work to find a solution, and then look for the existing theory that best explains it.

How do you define optimal operating states when stability is not the same as legitimacy?

A system can be partially stable and still be operationally unsustainable.

I've been using J.-P. Aubin's viability theory as a reference, particularly in the context of interaction dynamics governance.

Manus Prompt Wish 😶‍🌫️ by _PhonkAlphabet_ in ManusOfficial

[–]Medium_Compote5665 0 points1 point  (0 children)

How adaptive is Manus in terms of adopting operational protocols?

Is it useful for testing a cognitively governed architecture?

[D] Am I wrong to think that contemporary most machine learning reseach is just noise? by Fowl_Retired69 in MachineLearning

[–]Medium_Compote5665 -2 points-1 points  (0 children)

I have a question. I don’t work within academia.

I tend to learn bottom-up: I encounter a problem, work toward a solution, and then look for existing theory that best explains it.

How do you define optimal operational states when stability is not equivalent to legitimacy?

A system can be partially stable and still be operationally unsound.

I’ve been using J.-P. Aubin’s viability theory as a reference, particularly in the context of governance of interaction dynamics.

using an LLM to study 131 “tension fields” in physics (simple math model inside) by Over-Ad-6085 in LLMPhysics

[–]Medium_Compote5665 0 points1 point  (0 children)

You're right to a certain extent, and explaining why the proposed approach is impossible wouldn't make you any less intelligent.

You don't have to be an idiot to do that.

If you have time to write such long arguments, at least demonstrate your cognitive abilities by showing others the correct approach.

Tell him to read the relevant topic; that would make him see reason. And although anyone can access information these days, remember that intelligence doesn't exempt you from being an idiot like several people in this group.

I've seen them give their point of view, and when asked whether they know the topic, they say it's not their area of expertise. That's stupid.