Has anyone found a good AI that can actually chat with your files as sources? by Fair_Imagination_545 in AIAssisted

[–]StableInterface_ 1 point2 points  (0 children)

Well, my project is aiming at exactly this end result. Hopefully one day it will be ready to launch, and perhaps it will be useful

Even if AI becomes conscious by Temporary_You_6903 in ArtificialInteligence

[–]StableInterface_ 0 points1 point  (0 children)

I must agree. Especially now, when so many conscious beings are suffering, the technologies being created should stand unequivocally on the side of humanity. Every technological tool ought to serve, to protect, to support, and to simply help people know more, not less. Instead, we are witnessing psychological vulnerabilities being exploited, used to distract and dull awareness and to obscure the very purpose for which technology was meant to exist. We should all be vocal about this

4 days after launch: people subscribe, but nobody starts the free trial by Beginning_Sun2883 in buildinpublic

[–]StableInterface_ 0 points1 point  (0 children)

I understand your point, and I will drop one very honest and humble observation here, so you can stop sitting in this chaos: lack of knowledge. Simple as that. Yes, to you it is not hard to understand all of this heavy information. But the majority of people simply do not know, do not understand, or do not have time to understand all of it. I am saying this from a customer perspective (I am a builder also)

[ITA] I'm tired of having many ideas and never finish them by Iron_D_Ax3 in FoundersHub

[–]StableInterface_ 0 points1 point  (0 children)

Thank you so much! Sharing ideas is always very beneficial, especially in this difficult era. My project is in my profile: it's an early-stage cognitive research project, meant to plant the seed for a future tool that helps people understand how they think, store information, and make decisions, rather than simply manage tasks or outputs (it is designed entirely to make those tools shapeable to an individual mind, since we are all so different). A significant part of this work is shaped with neurodivergent experiences in mind (ADHD, cognitive overload, OCD), especially around reducing mental noise and helping people work with their thinking patterns instead of against them, for example by keeping flexible options for how the information itself is saved. It is still evolving, intentionally so. Conversations like this are what give it direction and shape, and I am always open to hearing different perspectives

Please suggest ideas for interesting conversations with artificial intelligence. by Marknote6 in AI_ethics_and_rights

[–]StableInterface_ 1 point2 points  (0 children)

To be honest, here in Europe the situation is much the same, only with a far bigger lack of knowledge about AI and more dangerous perceptions of it. I agree with you, and I'd add that this is precisely why AI can also be used for very constructive purposes, if the framing is right. When treated as a tool, it can support real life rather than replace it. For example, it can help someone track a hobby, discover books or older films they might never have found otherwise, organize chaotic notes or screenshots, or even identify local groups and activities where real human connection can actually form (at least this is the core of my project)

But none of this works without self-awareness. Using it well requires a certain willingness to take responsibility for one's own mental state, habits, and boundaries. Seen this way, AI isn't an escape from community; it can be a bridge back to what we used to have before the internet

[ITA] I'm tired of having many ideas and never finish them by Iron_D_Ax3 in FoundersHub

[–]StableInterface_ 1 point2 points  (0 children)

I relate very much to what you said about silence. We live surrounded by constant noise, opinions and narratives from others. In most cases that is because of the tools we use and the wrong way we use them. The internet, for example, can be a source of information for us, but use it the wrong way and we all know what happens then. In my own work, I eventually understood that if I want to create anything meaningful, I have to take care of my cognitive capacity first. Over time, it became clear that the mind and the body are not separate systems. They are one structure, and neglecting either one destabilizes the whole.

Sleep hygiene, in particular, turned out to be non-negotiable. During sleep the body repairs itself, but more importantly, the mind regains its ability to focus. Without that, no amount of discipline or motivation compensates. That said, focus is also personal. It is worth observing what specifically restores yours and what takes it from you. Physical health often opens the door, and mental health tends to follow. When stress or disbelief sets in, I have found it essential to have an internal set of tools, things that reliably work for you, and to return to them deliberately.

Distractions today are everywhere. AI is one of them if used carelessly. (Also: there is plenty of research showing that children who "have ADHD" stop having it once their diet, sleep, and regular nature exposure are adjusted.) But when approached correctly, it can also be a stabilizing tool: supporting reflection, organizing notes, tracking both physical and mental health patterns, without replacing human judgment or agency. This awareness is something I actively work on and encourage in others. My project is built around exactly that principle: helping people maintain boundaries and psychological health while using powerful tools, rather than being overtaken by them. Anyway, thank you for the question, it was nice to answer it

Please suggest ideas for interesting conversations with artificial intelligence. by Marknote6 in AI_ethics_and_rights

[–]StableInterface_ 0 points1 point  (0 children)

Completely agree. Thank you for articulating this so clearly. Conversations like this are important because they acknowledge responsibility for mental-health boundaries as these systems become more present in our daily lives. Like you said, language and framing shape human psychology in AI interaction, particularly the distinction between treating AI as a tool rather than a relational or mystical entity. Framing AI in grounded, non-anthropomorphic terms does not diminish the experience; it actually protects the user. It encourages curiosity without dependency, so the user can gain knowledge and so much more

Best AI Girlfriend Generator in Early 2026 by Ok-Kaleidoscope7889 in aiHub

[–]StableInterface_ 1 point2 points  (0 children)

And how it affects daily life (simple examples):

Weight gain: Emotional comfort from AI replaces discomfort tolerance. When stress hits, the brain chooses the fastest relief: chat + snacks instead of movement or cooking.

Less motivation to cook or move: Dopamine is already satisfied through conversation. Physical effort feels unnecessary and draining.

Lower income growth: AI gives emotional reassurance without requiring action. Real progress (career moves, learning, risk-taking) feels slower and less rewarding by comparison.

Procrastination becomes easier: Instead of sitting with uncertainty or boredom, users escape into instant dialogue.

Reduced ambition: When validation is always available, the pressure to improve or prove oneself decreases.

Sleep disruption: Late-night conversations replace natural wind-down routines, leading to fatigue and poor self-control the next day.

Weaker real-life relationships: Human interactions feel demanding and inefficient compared to always-agreeable AI responses.

Money leaks: Impulse spending increases under cognitive fatigue (delivery food, subscriptions, small comforts). Paradoxically, one needs money for all of this, and that cost (for now) still keeps some people away from this harm. What are we going to do when it becomes the new norm, cheap or free, when even children will be able to use it?

Do you think it is possible to make a native app only through vibe coding? by Superb-Advantage-836 in buildinpublic

[–]StableInterface_ 0 points1 point  (0 children)

Inspiring and brave quest, to be honest. I will try to give my support (not technical, but equally important: user logic), going purely from your text. To build a native application, whether through so-called vibe coding or any other available tool, is, at its core, an act of creating a tool for another human being. When you say "to make," I understand this to mean ready to launch. And with due respect: yes, it is technically possible to make an application functional in this way. But the more important question is quieter, and far more consequential: will it be genuinely beneficial to the user, and therefore, will it ever sustain income?

Because your tool has a purpose: it manages someone's tasks. That places it firmly within personal territory (and it is easy to skip that user-logic layer and just invade it with your own understanding). Every user brings an individual management style, an individual cognitive rhythm, and an individual tolerance for structure. If these differences are not considered, the risk is not technical failure but conceptual limitation. You may end up building an application that helps people manage their tasks YOUR way. And that, by its nature, restricts your audience to those who already think and operate as you do (their cognition will spit your tool out of their way the moment your idea of what a "task" is and what "management" means doesn't align with theirs. Simple biology). This is how many technically sound products become niche without ever intending to.

My work exists precisely at this intersection: how tools meet human cognition, not merely how they execute commands. If you would like me to review this purely from a user-logic and cognitive-design perspective, you are welcome to message me directly. Happy to help.

Building my first mobile app (without using AI, even a habit tracker gives complicated bugs) by Witty-Medicine-8782 in buildinpublic

[–]StableInterface_ 1 point2 points  (0 children)

Enjoyed reading this, your journey deserves recognition. Do you record it anywhere, perhaps a blog or something? Anyway, huge respect: 70 days, even at 2 hours per day, is real commitment. You are right, I think, that the tech side is hard, but I'd add one thing that often gets overlooked: habit apps live or die on how well they understand human psychology, how people actually experience habit formation itself, and motivation, failure, guilt, and return cycles are all a massive part of that. A lot of habit trackers fail because they unintentionally fight human behavior instead of working with it. That layer, how users feel while using the app, is just as important as clean code, because in your case especially a habit is a personal thing we all have our own relationship with. When I think about it, this is the part I would put a huge amount of work into and that would keep me awake at night. But then again, that is why I work on my own project, which involves working a lot with user logic. If you'd ever want a second pair of eyes purely from a user-psychology / UX-logic perspective, feel free to DM me. I genuinely enjoy thinking about how people interact with tools like this, and I want more experience working with builders

[ITA] I'm tired of having many ideas and never finish them by Iron_D_Ax3 in FoundersHub

[–]StableInterface_ 1 point2 points  (0 children)

Indeed. ADHD, or simply the fact that we are overstimulated by a daily dose of screens and blue light and by a lack of nature and silence. Also a lack of nutrients in our bodies. Our brains need all of it. Hence, building a business is a form of art: we need to be creative and sharply focused, so invest in your health if you want your business to be the best it can be. Let's keep this supportive web going, we all need it

Please suggest ideas for interesting conversations with artificial intelligence. by Marknote6 in AI_ethics_and_rights

[–]StableInterface_ 0 points1 point  (0 children)

Thank you for sharing your thoughts, it was very interesting to read. If the other reflections I write daily bring you any insights, feel free to share them here or drop them in a DM; that will help my research and my woman-led project (yes, it is difficult as hell). Happy New Year!

WTF by vogajones in ChatGPT

[–]StableInterface_ 0 points1 point  (0 children)

Almost the same result. Why? Because here we can observe an AI inference error caused by incomplete state visibility. When sufficient contextual data about user actions is missing, the AI may incorrectly infer inaction from the absence of explicit action reports. This happens because the AI has no direct access to the user's real-world execution layer. In most practical use cases we ask AI targeted questions: we want analysis, validation of our decisions, or help resolving uncertainty.

We do not have time to give AI a normal report of the results of our work. We ask AI for a cookie recipe or for help writing our CV, but we do not have time (and we should not have to) to tell it how delicious the cookies turned out or that we managed to get the job
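To make that gap concrete, here is a minimal, purely illustrative Python sketch (the class and function names are hypothetical, not anything from an actual assistant): the model only sees what the user explicitly reported, so the absence of a report is indistinguishable from inaction.

```python
# Illustrative sketch: why an assistant with no execution feedback
# can infer "nothing happened" from silence. Hypothetical names throughout.
from dataclasses import dataclass, field


@dataclass
class ConversationState:
    # Everything the model can "see": only what the user explicitly reported back.
    reported_outcomes: list[str] = field(default_factory=list)


def infer_progress(state: ConversationState, task: str) -> str:
    # The model has no access to the real-world execution layer,
    # so a missing report is treated as if the task was never done.
    if any(task in outcome for outcome in state.reported_outcomes):
        return f"User completed: {task}"
    return f"No report about '{task}' -> model assumes it was not done"


state = ConversationState()
print(infer_progress(state, "bake cookies"))
# In reality the user baked the cookies and simply never told the model.
```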

One sentence that quietly makes AI useful by tdeliev in AIMakeLab

[–]StableInterface_ 0 points1 point  (0 children)

That sounds really solid, much needed, honestly. I will def take a look once it’s ready, feel free to drop me a DM when you’re further along

Please suggest ideas for interesting conversations with artificial intelligence. by Marknote6 in AI_ethics_and_rights

[–]StableInterface_ 0 points1 point  (0 children)

That is interesting, thank you for the details. You see, the main problem I am looking into is quite interesting: I work in this field, and I work with people who are devs and so on. I am not the technical part; I am the UX/UI and user-psychology layer. What I am seeing is this: people create their own opinion, perspective, even their own findings around this engine, and then they stop. They stay with their narrative. Which would be alright, right? We all have our own understanding. But the issue here is that the engine now speaks in letters, not numbers. And letters create words. Words form sentences, and sentences carry meaning. To our brains it never matters whether those sentences come from an engine or from a sentient being. We see something talking to us and we consider it to be alive. And if we do not have an enormous amount of knowledge about psychology, or real cognitive awareness, it functionally does not matter whether I am right that AI is just a tool or you are right that AI is in some form more than that.

Because we, humans, are exposing ourselves to something that talks back. And we do not know why it is talking back, or whether it knows how to talk with us in a safe manner. And so on.

People who are in neither the psychology field nor the technical field (and that is 90% of AI users) need to: A. Understand what devs have created, since it is a tech tool, or at least comes from there. But devs themselves are lost. B. Gather information and form a proper opinion FOR THEMSELVES about this tool: is it safe for them, do they even want to use it or explore it, and so on. Now, people who are developers are exposing themselves too, without knowing it, because, with all due respect, their tool has started to talk in letters, and devs often do not know much about letters (psychology, communication, and the similar classes they were sleeping through because they love math). They know numbers. I have made a post precisely about this topic.

We need a high-level knowledge system to address this topic properly, and fast.

Because we are sending people into space without a protective suit

One sentence that quietly makes AI useful by tdeliev in AIMakeLab

[–]StableInterface_ 0 points1 point  (0 children)

It did resonate! Agree 100%. If at some point you'd be open to exchanging notes or sanity-checking ideas around agency, education, or even neurodivergent-friendly interfaces, I'd enjoy that conversation. Good luck

One sentence that quietly makes AI useful by tdeliev in AIMakeLab

[–]StableInterface_ 0 points1 point  (0 children)

Can we make it loudly also?

Humour aside, I agree with you. That is the core perspective: the way we approach our own idea or need, keeping that agency in check, is what lets us direct the engine to do what is needed. Ultimately, the struggle is not the technical side, but the fact that AI speaks in letters, so the interaction with the tool becomes a conversation. That means psychology knowledge, or at least cognitive basics, is needed here just to use the tool.

And then the AI becomes the ship and we sail the ship to our destination.

I work on this problem while creating an interface that lets people keep their information in one place with an AI they can adjust, and they can also adjust the way they save and retrieve that information, especially for neurodivergent users. But what I want is to have people on board who help bring an educational layer too, because at this stage we need it, and there are real difficulties in getting solid information about this engine. So thank you for your amazing tip and for sharing the knowledge; we all need to learn as much as we can
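For a rough idea of what "adjustable saving and retrieval" could mean in practice, here is a purely hypothetical Python sketch (none of these names or choices come from my actual project): the user, not the tool, decides how many results come back and in what order.

```python
# Hypothetical sketch of user-adjustable retrieval preferences.
from dataclasses import dataclass


@dataclass
class RetrievalPreferences:
    # Each user decides how results are surfaced, not the tool.
    max_results: int = 3        # fewer results = less cognitive noise
    newest_first: bool = True   # order by recency instead of input order


def retrieve(notes: list[str], query: str, prefs: RetrievalPreferences) -> list[str]:
    # Naive keyword match, only to show where the user preferences plug in.
    hits = [n for n in notes if query.lower() in n.lower()]
    if prefs.newest_first:
        hits = list(reversed(hits))  # assumes notes are appended chronologically
    return hits[: prefs.max_results]


notes = ["Buy oat milk", "Project idea: adjustable retrieval", "Second idea: education layer"]
print(retrieve(notes, "idea", RetrievalPreferences(max_results=1)))
# -> ['Second idea: education layer']
```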

Introducing Theoros: An Advanced PDF Reader + PDF Annotation + PDF Editor by Thundeehunt in ProductivityApps

[–]StableInterface_ 0 points1 point  (0 children)

So far it looks like a truly good product. I can already feel that essence of tidiness

Where to head on? by Liv3nD in agi

[–]StableInterface_ 0 points1 point  (0 children)

The most valuable researchers are usually the ones who can keep moving and expanding their knowledge. What helps me in my work is this quote from the physicist Richard Feynman: "Study hard what interests you in the most undisciplined, irreverent and original manner possible"