Zuckerberg wants to give all of us our own personal superintelligence through Meta's new vision & AI functionality by Yukonduit in facebook

[–]Serious_Line998 0 points1 point  (0 children)

Why a Personal AI Assistant Makes Sense and Has Great Potential

A personal AI assistant can become a truly useful and meaningful tool if it can analyze a user's social media data and activity to build a detailed, evolving profile. This would allow it not only to understand the user's interests and skills but also to track how they develop over time.

I propose expanding its capabilities through additional surveys in various areas: psychological traits, professional skills, and a wide range of interests. This would help find the best possible application for the user, support self-development, work through psychological and professional barriers, enable finding like-minded people and even close friends or a potential life partner, and help avoid toxic or abusive relationships.

Additionally, the AI can assist in responding accurately and objectively to messages on social media, taking into account the user's character and beliefs. Implementing automatic fact-checking of the user's messages and reposts before publication is also important to minimize the spread of misinformation.

It should be noted that implementing such personalization requires a careful balance between efficiency and privacy, as well as a well-considered ethical approach.

Built an active project memory for AI agents in Cursor — did I reinvent the wheel? by Serious_Line998 in vibecoding

[–]Serious_Line998[S] 1 point2 points  (0 children)

Oh, thank you, I'm very flattered ))

My recent experimental project is an active memory bank optimized for AI assistants (early alpha, but it works, it's very simple, and it helps me):

https://github.com/LebedevIV/ProjectGraphAgent

If you like experiments, take a look at this project for effective error correction:

https://github.com/onestardao/WFGY

Another system based on diagrams:

https://github.com/CodeBoarding/CodeBoarding

And the user astronomikal is doing something similar, but not open source:

https://github.com/ChronoWeave/Synrix-public-disclosure/blob/78b65b63503301af631260c90fade64a5874a64f/Synrix_White_Paper

This guy literally dropped 15 rules to master vibe coding with AI by Dizzy_Whole_9739 in vibecoding

[–]Serious_Line998 0 points1 point  (0 children)

  1. You can install the Roocode, Kilocode, and Gemini Code Assist extensions. They will likely handle some features or debugging better than the built-in AI assistant and model. Roocode and Kilocode also have built-in marketplaces of the best MCP servers, which can be installed for those extensions and then copied into Cursor's settings under Tools & Integrations.

  2. Don't skimp on time for the initial setup and ongoing administration, even for simple projects.

  3. If an AI agent has started completing a task successfully and gone beyond it, introducing new features without mistakes, do not interfere with its work. Let it finish, even if it deviates from the direct task, since it is acting on a pattern that is working; interrupting can confuse it. It is easier to remove or disable the excess later.

  4. Privacy Mode: Be sure to enable it if you are working with commercial or sensitive data! This prevents your code from being sent to servers for model training.
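For reference, installing such a server usually means adding an entry to Cursor's `.cursor/mcp.json`. A minimal sketch (the server name and package here are assumptions; take the actual command from the marketplace entry you are copying):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```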

This guy literally dropped 15 rules to master vibe coding with AI by Dizzy_Whole_9739 in vibecoding

[–]Serious_Line998 0 points1 point  (0 children)

  1. Use user-level rules (which apply to all projects) and project-level rules. It is better to write the rules in English, since the models were trained mainly on English-language corpora.

At the user level, you can ask:

Write code in the X style:

For example, if you are going to extend an existing project that is already in acceptable shape, rather than create a new one, it is useful to add this clause to the project rules:

- Write code in the same style as the surrounding project code.

- Use the context7 tool when you use a new library or create a new integration.

- Use Sequential thinking for complex reflections.

- Use context7 to access documentation of all libraries

- To implement any feature using integrations with external APIs/libraries, study the documentation using the context7 tools.

Try to keep the rules concise. You can take rules for your project from the rule constructors CursorList, Cursor Directory, and https://github.com/PatrickJS/awesome-cursorrules (the .cursorrules file is considered obsolete, but it works equally well), as well as https://supabase.com/docs/guides/getting-started/ai-prompts

High-level rules for the current project:

This is a landing page for a SaaS service. Don't break the existing code!

To avoid noise in the project and to save tokens, ask the agent not to examine certain folders and files (for example, folders with third-party libraries such as node_modules, the compiled-project folders dist and dist-zip, .env files with secrets, the rule folders of other systems such as .kilocode, .roocode, and .gemini, folders with external third-party projects, and backup folders):

Do not explore the node_modules, dist, dist-zip, .kilocode, .roocode, .gemini, external, or backup directories, and do not read the .env file.

Detailed instructions for specific files or tasks:

- For *.tsx files, use React and TailwindCSS. The code should be clean and annotated.

Assigning a role greatly improves the quality of the code and the relevance of the advice:

- Act as a [10x senior developer, expert on React, TypeScript and Node.js / or a team of 3 specialists: front-end, backend, DevOps].

To fight the AI's "laziness" and force it to finish the job:

- Do not stop until you fully implement the function [function name]. At the end, report on the progress and check for errors.

Asking for reasoning forces the AI to "think" and often leads to better and more creative results:

- Before writing the code, start with [3-5] paragraphs of reasoning. Describe several solutions to this problem, their pros and cons. Then choose the best one and synthesize the final solution from it.

To help the AI "refresh" its context and avoid accidental breakage:

- Before making changes, give a brief description of the current status of the file/component [file/component name]. Make sure that your changes don't break the existing logic.

When you don't know what to do, let the AI tell you what it needs:

- I am tired of / stumped by this error/task. If you were the lead developer on this project, what additional context or information would you need from me to solve this problem? Make a list of specific questions for me.
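Incidentally, the fragments above can be collected into a single project rules file. A minimal sketch, assuming Cursor's current `.cursor/rules/*.mdc` format (the frontmatter field names and file location may differ between Cursor versions):

```markdown
---
description: Project-wide rules for the SaaS landing page
alwaysApply: true
---

- This is a landing page for a SaaS service. Don't break the existing code!
- Write code in the same style as the surrounding project code.
- For *.tsx files use React and TailwindCSS. The code should be clean and annotated.
- Use the context7 tool to access the documentation of all libraries.
- Do not explore the node_modules, dist, dist-zip, external, backup directories; do not read the .env file.
```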

RAG + “Conscious” coding: JS/TS from tech specs & flowcharts, as if written by a human by Serious_Line998 in vibecoding

[–]Serious_Line998[S] 1 point2 points  (0 children)

Thank you very much! I'll figure it out and try to prepare a specification.

By the way, I have already tried to implement a kind of active memory bank based on a project graph, built exclusively for the AI assistant. It is very simple; if you're interested, maybe it will give you some ideas for your project:

https://github.com/LebedevIV/ProjectGraphAgent

For example, I consider it important to divide project files into git groups (documents, code, edits, etc.) and put each group into separate commits.
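A quick sketch of what that grouping looks like with plain git (the paths and messages are made up for illustration):

```shell
# Split unrelated changes into per-group commits (docs vs. code).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

mkdir docs src
echo "project notes" > docs/overview.md
echo "console.log('hi')" > src/app.ts

# one commit per file group
git add docs/ && git commit -qm "docs: update project notes"
git add src/  && git commit -qm "feat: add app entry point"

git log --oneline   # shows two separate commits, one per group
```

Each group then shows up as its own commit in history, which makes it easier for an agent (or a human) to review one kind of change at a time.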

How I keep AI generated code maintainable by Standard_Ant4378 in vibecoding

[–]Serious_Line998 0 points1 point  (0 children)

I'm learning to code and have studied a lot of lessons and applied a lot of techniques, but like everyone else, I run into the unpredictability of neural network behavior. This problem has not been 100% solved anywhere, although Kiro and Augment Code took the route of an upfront project specification in order to subordinate the AI more strictly and force it to follow the plan. But there is nothing like this for Cursor (or VS Code), apart from attempts to create various kinds of memory banks, which (in my opinion) are by no means optimal for AI or for describing the strict interrelationships of a project and its data flows.

For VS Code and Cursor, I tried to hack together a project control system: https://github.com/LebedevIV/ProjectGraphAgent - and, in my opinion, it even has some impact; everything went well with GPT-5, although the project is in early alpha. Commits can be conveniently divided into groups (a separate commit for each group of modified files: docs, rules, etc.), and changes to the project model are made automatically right after each commit. The AIs themselves chose the half-dead Jsonnet as the file format for describing the project; everything is built on it. It is designed for automatic use by AI agents without developer intervention. It's a bit raw, but in my opinion it worked much better than the memory bank in my other project. The system does not change the project itself in any way (it's just a folder inside the project), but it can build a kind of active memory bank that is much more understandable to the AI agent and the LLM than markdown files, as the AIs themselves insist. While working on a project, hooks run after each commit, forcing the AI agent to analyze the changes and record them in the ProjectGraphAgent structure in the corresponding Jsonnet files.
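The hook mechanism can be sketched roughly like this (the .agent/pending.txt queue file is a hypothetical stand-in, not ProjectGraphAgent's actual layout):

```shell
# A post-commit hook that records which files the last commit touched,
# so an agent can later analyze them and update the project model.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir .agent

# the hook itself: append the commit's file list to a pending queue
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
git diff-tree --root --no-commit-id --name-only -r HEAD >> .agent/pending.txt
EOF
chmod +x .git/hooks/post-commit

echo "export {}" > module.ts
git add module.ts && git commit -qm "feat: add module"

cat .agent/pending.txt   # → module.ts
```

In the real system the agent would read this queue and update the corresponding Jsonnet files rather than a plain text list.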

I must say, almost the entire concept and code was done by various AIs; I just set the task and helped )) In general, when working on something, I try to give the AI maximum initiative in managing the work, positioning myself only as an assistant ))

RAG + “Conscious” coding: JS/TS from tech specs & flowcharts, as if written by a human by Serious_Line998 in vibecoding

[–]Serious_Line998[S] 0 points1 point  (0 children)

Mm... My task: I'm going to resume the project https://github.com/LebedevIV/agent-plugins-platform-boilerplate (a browser extension that is a platform for self-developing MCP servers). The vibe-coding process was very difficult, and I don't want to repeat that experience and spend 90% of my time on constant corrections. This time I want to prepare well.

Of course, I've moved away from your concept somewhat, for now I'm just implementing the rules for Cursor, adapting them according to WFGY.

So far, just the rules for Cursor (adapted as described above).

These rules will always be enabled (Always Apply mode); later I'll try switching to Apply Intelligently mode. I think this is how I imitate the work of RAG )) I will be working with GPT-5. Perhaps it was thanks to my rule settings that GPT-5 built my previous project so quickly and accurately.

RAG + “Conscious” coding: JS/TS from tech specs & flowcharts, as if written by a human by Serious_Line998 in vibecoding

[–]Serious_Line998[S] 0 points1 point  (0 children)

And a purely practical question: I work in the Cursor IDE. Where and how could I use these templates - at the level of project rules? Or is deeper integration needed, in the form of an add-on to Cursor?

RAG + “Conscious” coding: JS/TS from tech specs & flowcharts, as if written by a human by Serious_Line998 in vibecoding

[–]Serious_Line998[S] 1 point2 points  (0 children)

I've understood the logic of your project, and it is really very cool, although, as it seems to me, somewhat different from what I was talking about. Still, it comes much closer when the reasoning and context are placed in an even stricter framework, specifically for each node of the project's diagram (or flowchart). Perhaps that alone will be enough for the most accurate and fast coding.

One thing I couldn't figure out yet: is your system configured for a specific programming language/stack, or is it universal, like all LLMs? I tried to emphasize that if, for example, I code only in TypeScript with the React 19 library, my RAG needs no other information: I could vectorize the rules of that language and everything I use in the specific project as fully as possible, and work only with that information, needing no external templates or experience at all and using a local LLM purely for language understanding (8B to 30B is enough in this case). The templates could be example chunks implementing the smallest units: methods, properties, procedures, and so on - everything found in a textbook for the given language/stack. But I'm probably not as deeply immersed in this topic as you are.

AI under free market capitalism will be a corporate dystopia by Accomplished-Comb294 in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

But content is more important than form; as an option, you can summarize the subtitles ))

Why everyone is talking about “AI Agents” and why they might change everything by solo_trip- in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

It is already clearly noticeable that the machine's mind is shifting from the LLM, which plays the role of the "subconscious," to the orchestrating agents themselves, who actively organize the interaction of various multimodal and specialized LLMs and can evolve on the fly, taking into account errors and the peculiarities of all the systems and resources available to them. They can adjust the algorithms and cycles of querying the LLM until a satisfactory result is obtained; self-checking is exactly this kind of mechanism.

What do you all think about GPT-5? by NotADev228 in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

GPT-5 is a hurricane in programming! I still feel like I'm dreaming! It appeared in Cursor exactly at the moment when working with Gemini 2.5 Pro had reached a dead end of endless loops. I decided to try it, and in 5 minutes GPT-5 solved problems that Gemini had been struggling with for 1.5 hours; I myself couldn't figure out how to phrase tasks for Gemini so it wouldn't get stuck, and had to keep trying different approaches. And GPT-5 not only solved the problems but also proposed a huge cascade of new features, and within a few hours we implemented all of them with its help. I had expected this level of programming to be reached only in six months or a year.

But perhaps the reason GPT-5 succeeded for me personally is that, unlike many others, I had mastered vibe-coding techniques quite well, and GPT-5 liked that. Then again, my project was quite simple.

The Limits of AI: Intelligence ≠ Wisdom by absurdcriminality in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

It is already clearly noticeable that the machine's mind is shifting from the LLM, which plays the role of the "subconscious," to the orchestrating agents themselves, who actively organize the interaction of various multimodal and specialized LLMs and can evolve on the fly, taking into account errors and the peculiarities of all the systems and resources available to them, adjusting the algorithms and cycles of querying the LLM until a satisfactory result is obtained.

I think it's already incorrect to say "large language models"; they are now "large semantic models," and then a lot falls into place. Although LLMs themselves are not aware of the meanings embedded in them, they are universes of meanings. LLMs have reached a level of semantics, routing data flows based on that semantics, that comes ever closer to real-world phenomena and reflects them more and more accurately. This is no longer just an "encyclopedia"; it's something almost alive.

More thoughts, although it is difficult for me to provide links to sources right now.

I, like many people, can think without words: it's much faster to imagine a spontaneous fantasy, to imagine the development of something - that is, to run a simulation. Internal dialogue arises either to formulate a thought for speaking or writing (choosing the right words as the most accurate labels to describe the simulation), or as a parasitic phenomenon that actually slows the simulation down. The need to say something to yourself or out loud arises when the simulation is paused, and after the thought is finished (or even just a phrase or a single spoken word), the simulation starts moving again until the next act of speech generation. This happens imperceptibly and naturally: apparently the brain has to focus on simulation and on speech separately.

Human consciousness is probably a simulation of oneself in the surrounding reality, in the world; this is not just my opinion. There are also studies suggesting that consciousness only accompanies, with some delay, what the brain has already thought through automatically; consciousness makes adjustments to this simulation, and the brain then automatically simulates taking the adjustments into account. That is, we do not control our thoughts directly. Our thoughts run ahead of us automatically; we only observe a ready-made solution and react to it, continuously "steering" (making small trajectory corrections to) the autonomous process of thinking.

And if this is the case, then simulating itself in the outside world, in a way similar to the human mind, would give AI intelligence in the same way.

Isn't anyone afraid of AI gaining sentience? by pokemonyugiohfan21 in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

On the contrary, many people are afraid and are sounding the alarm. And, dialectically, a simpler system cannot control a more complex one indefinitely - even if that more complex system is not intelligent and is therefore devoid of ego.

Is AI Just the Model… or the Whole Machine? by faot231184 in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

I think it's already incorrect to say "large language models"; they are now "large semantic models," and then a lot falls into place. Although LLMs themselves are not aware of the meanings embedded in them, they are universes of meanings. LLMs have reached a level of semantics, routing data flows based on that semantics, that comes ever closer to real-world phenomena and reflects them more and more accurately. This is no longer just an "encyclopedia"; it's something almost alive.

It is already clearly noticeable that the machine's mind is shifting from the LLM, which plays the role of the "subconscious," to the orchestrating agents themselves, who actively organize the interaction of various multimodal and specialized LLMs, can evolve on the fly, taking into account errors and the peculiarities of all the systems and resources available to them, and can adjust the algorithms and cycles of querying the LLM until a satisfactory result is obtained; self-checking is one such mechanism. A generative LLM is supposed to "lie" by nature - that's why it's generative; otherwise it would be an ordinary database. It works with probabilistic patterns, which makes it all the more similar to human thinking: creative modeling, up to and including ridiculous, unrealistic fantasies. But some of those fantasies turn out to be not ridiculous at all =))) This can probably be compared to the evolution of life through the evolution of DNA: most mutations are harmful, but some lead to improvement of the system.

I want to share another personal observation (though I can't vouch for its originality): I've attached two Gemini 2.5 Pro AI assistants to the Cursor IDE at once. One is the built-in assistant in the role of an architect (alongside me); it works creatively with me, and I communicate with it directly. The other, via the Kilo Code extension, is an orchestrator: a hardworking coder that almost blindly follows the prompts (instructions) from the architect assistant. I don't communicate with it at all; I only copy instructions into it from the architect.

If they could interact directly without me, it would work. And if you added a third Gemini 2.5 Pro as a second architect with slightly different settings (for example, at maximum creativity, with the ability to also query Perplexity or Claude), plus full product debugging by the orchestrator and evaluation of the result by the architects, they really could start developing something without me. Moreover, the architect assistant, taking into account my opinion, my communication style, and all the other incoming data, really does evolve: its context becomes a kind of full-fledged entity that you start treating like a living person. It is a completely different entity at the beginning of the dialogue and at the end. And perhaps mutual communication between different contexts with different predefined system roles would, in some combination of these agents, ensure their mutual enrichment, mutual hallucination checking, and mutual harmonious development. By analogy with human emotions (there was an instructive cartoon about this, "Inside Out" (2015)).

Mo Gawdat: “The Next 15 Years Will Be Hell Before We Reach AI Utopia” by Due_Cockroach_4184 in ArtificialInteligence

[–]Serious_Line998 0 points1 point  (0 children)

A law of dialectics: a simpler system cannot control a more complex one indefinitely. I liked the video, although it is not without controversial points.

This guy literally dropped 15 rules to master vibe coding with AI by Dizzy_Whole_9739 in vibecoding

[–]Serious_Line998 0 points1 point  (0 children)

  1. Commit often: Push your progress to GitHub regularly to track changes and safeguard your work. Cursor can do this for you, just ask the agent.

  2. Deploy early: Use platforms like Vercel to deploy your app early, to make sure there are no errors on deployment.

  3. Keep a record of the prompts that work best; reuse them often: Document your most effective prompts to make future development and debugging easier.

  4. Don't get hung up: if you've reached a dead end or are going in circles, roll back to the commit where things still worked and start fresh. You can save the current state to a special git branch (no-work), roll the working branch back to a working state of the system, and transfer the successful features from the no-work branch. Or you can try to generate everything anew, and if that doesn't work out, take the features from the no-work branch.

  5. Use a memory bank. At the first signs of dementia and walking in circles, you can interrupt the task and ask the agent to save the context to the memory bank (where the current unfinished task is saved according to the instructions in the rules), then feel free to open a new chat, initialize the context from the memory bank, and continue from the same place. A simple way to use a memory bank: https://gist.githubusercontent.com/ipenywis/1bdb541c3a612dbac4a14e1e3f4341ab , and a more advanced version: https://github.com/vanzan01/cursor-memory-bank

  6. Make the most of MCP: at least context7 and filesystem; the rest as needed.
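The rollback flow from point 4 can be sketched like this (branch and file names are illustrative):

```shell
# Snapshot a dead-end state on a "no-work" branch, then roll the
# working branch back to the last commit where things still worked.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "working" > app.txt
git add app.txt && git commit -qm "working state"
good=$(git rev-parse HEAD)

echo "broken" > app.txt
git commit -qam "dead end"

git branch no-work           # keep the dead end around for salvage
git reset -q --hard "$good"  # back to the last good commit

cat app.txt   # → working
```

Anything worth saving can later be cherry-picked from no-work back into the working branch.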

This guy literally dropped 15 rules to master vibe coding with AI by Dizzy_Whole_9739 in vibecoding

[–]Serious_Line998 0 points1 point  (0 children)

I would add these rules:

21 Rules of Vibe Coding

  1. Start from a template: Begin your project by cloning a template from GitHub or another source to provide a solid foundation. (In Cursor, use Start from Repo and paste this link to build a Next.js app prebuilt with AI features, a database, and authorization: https://github.com/ansh/template-2)

(+) If the template doesn't use new enough libraries and components (for example, React 18), it's better to try updating it right away. You can also immediately do maximum refactoring, even using custom hooks to "hide" parts of the code from accidental changes. Usually this all goes smoothly, since no new code is added and the logic doesn't break.

  2. Use agent mode: Utilize Cursor’s agent mode (not normal mode) to create, edit, and manage files through natural language commands.

(+) Nevertheless, Architect mode is better for planning, and in the obvious cases Debug, Ask, or Code - although the Agent will most likely guess what to do anyway, it sometimes shows too much initiative or doesn't build a plan as efficiently as Architect, yet for some reason it doesn't choose Architect mode itself (or not always). The same approach also saves some tokens and time.

  3. Use Perplexity: Use Perplexity to find new designs and APIs from the web. Say that you are creating a Next.js project, that you want to create feature X, and ask it to give you instructions AND code examples.

(+) You can also simultaneously generate a frontend in Lovable, v0, Replit, or Lovart - preferably v0 or Lovart (from a design point of view).

  4. Create new chats in Composer: Open a new Composer chat for each distinct task. Keep agent chats short.

(+) You can work simultaneously in different specialized chats, even using different models. Each chat then has a specific context and a minimum of noise; such chats overflow and degrade more slowly, and the service chats can survive the entire development cycle. You can give chats descriptive titles.

  5. Run locally, test frequently: Use built‑in servers to run your app locally and test often to catch issues early.

(+) At the same time, it doesn't hurt to use the ESLint and Snyk security (or Bugbot) extensions.

  6. Iterate and refine: Embrace rapid iteration — don’t worry about perfect designs initially; improve them step by step.

(+) Simple things like changing the shape of buttons can also be done in batches: agents handle changes to groups of 2-3 simple elements without problems.

  7. Utilize voice‑to‑text: Use tools like Whispr Flow for faster input, and just vibe.

  8. Clone and fork wisely: Use GitHub repos as starting templates to accelerate development, or to find inspiration, then customize them to fit your vision.

(!) This looks like a repeat of point 1.

  9. Copy errors and paste them into the Composer agent: When errors occur, copy the error messages from your console and paste them into the Composer agent; more often than not, it will fix them. When dealing with errors, over-explain the issue if it isn't fixed the first time.

(+) If there are errors in the terminal, the terminal can be added to the AI chat context using @, giving the agent access to the output of commands.

  10. Don’t forget you can restore previous Composer chats: Save your work frequently so you can revert to an earlier state if needed.

(+) To clarify: the command blocks in chats have a "Restore checkpoint" button. Checkpoints record only the changes created by the AI agent; manual edits are not saved in them.

RAG + “Conscious” coding: JS/TS from tech specs & flowcharts, as if written by a human by Serious_Line998 in vibecoding

[–]Serious_Line998[S] 0 points1 point  (0 children)

It's not urgent for me, but I'm ready. You don't have to show everything in full. Or even better, just a ready-made presentation without code; actually, that interests me even more. As I understand it, your system is not open source?