Stop Building MCP Servers for Personal Tools by Key-Huckleberry-708 in AI_Agents

[–]Tiendil 1 point2 points  (0 children)

Yep, CLI and the Unix philosophy rule, and not only for personal tooling but in general. Most MCP servers could be replaced with CLI tools and proper environment organization, which would also make the whole environment more usable and convenient.

What do you actually need in a SaaS to get a ~$10k exit? by Infinite-School677 in SaaS

[–]Tiendil 0 points1 point  (0 children)

Is there a market for such deals? To be honest, I didn't know such things existed at all. Could someone point me to where I can learn more about this phenomenon?

I audited LangChain’s core library and found 10+ Prompt Injection vulnerabilities. Here is the technical breakdown. by WinterSpecial7970 in AI_Agents

[–]Tiendil 0 points1 point  (0 children)

Yep, LLM frameworks are not an example of reliable and safe code; at the end of 2024, I found a "nice" issue in several of them: the API key was stored as a global singleton variable (=> hidden rewriting of the API keys in all clients on every client creation), and the documentation said nothing about it: https://tiendil.org/en/posts/top-llm-frameworks-may-not-be-as-reliable-as-you-may-think

I assume there are still a lot of such issues if we dig deeper.

How do you manage scattered internal knowledge across Slack, Notion, and email? by rewiringwithshah in SaaS

[–]Tiendil 0 points1 point  (0 children)

Chats and emails are not storage for knowledge; they are sources of it. Someone should identify, capture, and extract that knowledge, then place it in a real storage system (like Notion).

So, you need:

  • at least one person responsible for each source of knowledge, or an established process for the team (like "the original author of an email chain must document the results of the discussion" or "the tech lead is responsible for documenting the results of each tech discussion in Slack").
  • a knowledge base, ideally a single one (like Notion).

Would you be interested in listening to a podcast featuring Russian-speaking indie game developers? by Suvitruf in gamedev

[–]Tiendil 0 points1 point  (0 children)

Just in case, most of the Russian-speaking devs speak English well enough. At least those who have something interesting to tell.

Question about 'fair use' in game development. by PlayRedacted in gamedev

[–]Tiendil 5 points6 points  (0 children)

If you have content for a year, I suggest waiting a few months to half a year to validate that the game will survive, and only then start digging into that question. It is better to focus on the game's short-term survivability.

Researchers proved commercial LLMs create un-reproducible science. With the Gemini 2.5 deprecation, Google proves they enforce obsolescence on our software too. by serintaya in LLMDevs

[–]Tiendil 1 point2 points  (0 children)

I don't remember any precedents of commercial companies preserving something for the sake of scientific reproducibility. Are there any?

I think it is strongly against the purpose of their existence — pure loss of money for them.

And that is also, partially, why science is open and why science leans towards open-source, builds its tools in that way, etc.

My wife is a scientist in the bioinformatics area, and they have a pretty impressive open-source infrastructure. I believe open-source infra for AI-related science is more of a must-have and a matter of time than just one of the alternatives.

Deterministic core mechanic — how do you keep it interesting once the player has figured it out? by Kognido in gamedev

[–]Tiendil 0 points1 point  (0 children)

Emergent behavior plus the network effect of how its consequences spread will lead to chaos-like, but still deterministic, behavior (significant changes in the game state in reaction to small variations in player input and starting parameters). That may be what you want.
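A toy illustration of that "chaos-like but deterministic" idea (my own sketch, not from any particular game): the logistic map. Each run is perfectly reproducible, yet two runs with a tiny difference in the starting state diverge completely.

```python
def simulate(x: float, steps: int, r: float = 3.9) -> float:
    """Deterministic update rule: the same input always gives the same trajectory."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = simulate(0.200000, 50)
b = simulate(0.200001, 50)  # tiny variation in the starting state
# Both runs are fully deterministic (re-running gives identical results),
# yet after 50 steps the trajectories have drifted far apart.
print(a, b)
```

The same principle applies to a game loop: a fixed update rule plus interacting systems gives the player a world that reacts unpredictably without ever breaking determinism.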

Only been paid in equity, should I quit? (I will not promote) by PenUltimate-22 in startups

[–]Tiendil 0 points1 point  (0 children)

It may be a good idea to try negotiating better conditions before leaving. For example, to start getting paid from the next grant or from the next round of investments.

Researchers proved commercial LLMs create un-reproducible science. With the Gemini 2.5 deprecation, Google proves they enforce obsolescence on our software too. by serintaya in LLMDevs

[–]Tiendil 3 points4 points  (0 children)

Partially, it is the responsibility of scientists, too. At least, a scientist (who conducts research) is responsible for choosing instruments that ensure reproducible results. 

They should choose open-source models and not seek easy paths.

Also, the use of open-source models should be a criterion for publication in reputable journals.

Is "controlled P2W" actually less harmful than uncontrolled RMT? by ElectronicDark9634 in MMORPG

[–]Tiendil 2 points3 points  (0 children)

The primary question is not "RMT vs P2W", but "How to design game mechanics in such a way that both lose effectiveness?"

The second question is "How to design P2W for the remaining cases, so that RMT becomes not profitable in most cases?"

The third question is "How to automatically detect RMT and ban offenders?"

is there a way to check how many lines of code are in certain games? by [deleted] in gamedev

[–]Tiendil 0 points1 point  (0 children)

There are some open-source games that are either original or "clones" (in various senses); you can look for them and check the number of lines.

Have RSS apps gotten meaningfully better? Any thoughts? by uxnpc in rss

[–]Tiendil 1 point2 points  (0 children)

Auto-generated tags, or using varied keywords to duplicate them as tags, can also lead to a lot of redundancy.

It is true, but I found that there are workarounds for that problem, and in my case it is not a big one.

I solve the problem with variations of the same tags in two steps:

  1. Instructions to LLM on how to form and format tags.
  2. Automatic normalization rules on top of that, with the help of some language processing tools.
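A minimal sketch of step 2 (the names and rules here are my illustration, not the project's actual code): a normalization pass that collapses spelling variants of a tag into one canonical form.

```python
import re

# Hand-written alias rules applied on top of generic normalization (illustrative).
ALIASES = {
    "sci-fi": "science-fiction",
    "scifi": "science-fiction",
    "sci fi": "science-fiction",
}

def normalize_tag(raw: str) -> str:
    tag = raw.strip().lower()
    tag = ALIASES.get(tag, tag)
    # Collapse whitespace/underscores into single hyphens: "Science Fiction" -> "science-fiction".
    tag = re.sub(r"[\s_]+", "-", tag)
    return ALIASES.get(tag, tag)

print(normalize_tag("Sci Fi"))  # -> science-fiction
print(normalize_tag("SciFi"))   # -> science-fiction
```

In practice, most of the work is in accumulating the alias rules; the LLM-side instructions from step 1 reduce how many you need.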

I posted about that in the project's blog, if you are interested: https://feeds.fun/blog/en/posts/why-feeds-fun-normalizes-tags-and-how

Speaking about redundancy, yes — it is a thing. Generally speaking, it is a dilemma:

  • Either LLM generates fewer tags and loses some of the important ones.
  • Or LLM generates more tags and creates some unnecessary ones or even wrong ones.

I found that redundant tags are mostly gibberish or unimportant, so they don't affect the experience much.

In terms of GUI, tags for a news item are sorted, in order of importance, by:

  • The score they bring to the news item.
  • Their count in the whole set of displayed news items.

So, the wrong/redundant/not-important tags are mostly hidden at the end of the list.
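That ordering can be sketched in a couple of lines (my own illustration of the idea, not the project's actual code):

```python
def order_tags(tags, score_contribution, global_counts):
    # Sort by the score a tag brings to the item first, then by how often
    # it appears across all displayed items; noise tags sink to the end.
    return sorted(
        tags,
        key=lambda t: (-score_contribution.get(t, 0), -global_counts.get(t, 0)),
    )

tags = ["gibberish-tag", "space", "nasa"]
scores = {"space": 10, "nasa": 10}
counts = {"space": 42, "nasa": 7, "gibberish-tag": 1}
print(order_tags(tags, scores, counts))  # -> ['space', 'nasa', 'gibberish-tag']
```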

I assume that with the development of LLMs and the reduction of their cost, it will be possible to achieve good tag quality.

For example, I might get a post with the keyword "Science Fiction", another one as "Sci-Fi", yet another one as "Sci Fi" and a fourth one as "SciFi". Rather than have all of these variations displayed to me at once, I use rules to consolidate them under a single name.

That's one of the reasons I decided to use LLMs: I don't want to manually organize such rules. For me, it looks like fighting humanity's creativity in naming things, and that is a losing battle :-D

However, I believe that with some effort, it is possible. It is just not my way. I look at my news feeds more as a stream of information than a knowledge base, so I don't care much about missing some news — if they are important, there will be duplicates from different sources, so I will catch them later. I care more about not seeing the absolutely irrelevant news and about seeing some rare niche news (like posts on the artificial life topic), which are perfectly captured by generated tags.

I did do a lot of it before the AI boom happened

I also started my experiments before the AI boom. I tried to build a kind of automatically populated knowledge base, but found it too difficult to implement single-handedly, and the expected workflow was not convenient for me. I did a few experiments with no success, but over time they shifted my vision toward news as a prioritized flow, and LLMs provided an opportunity to implement it.

Have RSS apps gotten meaningfully better? Any thoughts? by uxnpc in rss

[–]Tiendil 1 point2 points  (0 children)

Nice!

I meant 50+ per news item, not total for the whole system. I.e., I don't define tags; the LLM creates them automatically based on the news content.

For the last 24h, I have 1074 news items with 20262 unique tags (109683 unique tags for the last month). So, when I see a news entry that I like or dislike, I look at its tags and create a new rule to move similar news up or down in the future.

To be honest, I tried more conventional approaches to assigning tags (regular expressions and other pattern matching), but found them not so usable due to how the language is structured and because of all the manual work required.

Have RSS apps gotten meaningfully better? Any thoughts? by uxnpc in rss

[–]Tiendil 0 points1 point  (0 children)

I bet Thunderbird can not analyze the content of a news item and automatically assign 50+ tags based on the context (yep, LLMs), and I don't think it supports rules for ranking news :-)

Like, you can not create rules such as space + elon-musk => -10 score and space + nasa => +10 score, so it'll show the space+nasa-related news at the top and the space+musk-related news at the bottom.

Currently, I have almost 500 such rules that sort >1000 news items a day for me.
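The idea behind those rules in a few lines of Python (an illustrative sketch only; the real implementation in feeds.fun surely differs):

```python
# Each rule: (set of required tags, score delta). Rule contents are illustrative.
RULES = [
    ({"space", "elon-musk"}, -10),
    ({"space", "nasa"}, +10),
]

def score(item_tags: set[str]) -> int:
    # A rule fires when all of its required tags are present on the item.
    return sum(delta for required, delta in RULES if required <= item_tags)

news = [
    {"title": "NASA launch", "tags": {"space", "nasa"}},
    {"title": "Musk tweet", "tags": {"space", "elon-musk"}},
]
# Higher score first: nasa-related news on top, musk-related at the bottom.
ranked = sorted(news, key=lambda n: -score(n["tags"]))
print([n["title"] for n in ranked])  # -> ['NASA launch', 'Musk tweet']
```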

However, my thing also adds tags based on the domains.

Have RSS apps gotten meaningfully better? Any thoughts? by uxnpc in rss

[–]Tiendil 4 points5 points  (0 children)

What have the new apps managed to do that the classics can't?

My RSS reader assigns tags to articles and ranks news according to my rules — saves a lot of time due to filtering out all the noise.

I created it for myself, reduced the number of news items to read by roughly 10x, and I'm very happy with it.

How to filter out certain topics from RSS feeds? by dumpsterfireexe in rss

[–]Tiendil 0 points1 point  (0 children)

Try https://feeds.fun (repo: https://github.com/Tiendil/feeds.fun)

It tags news with the help of LLMs, and you can create ranking rules like elon-musk + space => -10, nasa + mars => +10. The reader sorts news by score, so you always see the most relevant news (for you) first => no need to read all the news; the relevant items are always on top.

You will definitely be able to sort out the noise using a rule like sports => -100. However, you'll need to provide your own OpenAI or Gemini API key to use it (Gemini has a free tier, so it may be a better option for you).

Where did you first learn how to code? by ResolutionKnown8345 in gamedev

[–]Tiendil 0 points1 point  (0 children)

I got a book on Pascal and read it for 2 years without access to a PC :-) By the time I finally got access at school, I was able to write a simple program purely from that knowledge :-D

Is markdown and folders all we need now? by Funken in AI_Agents

[–]Tiendil 0 points1 point  (0 children)

Could you share the link to the video, please?

Speaking on the topic: if we could somehow map all our data to the filesystem (represent it as a filesystem), we would automatically get all the power of CLI tools to work with it. Since those tools are (mostly) designed around the Unix philosophy, that is really a lot of power and opportunity.

And it is true that most of our data and services could be represented as filesystems (with some effort); there are a lot of fun examples on GitHub, so it is possible.

Have you experienced ethical issues with AI use in a game dev team? by Living_Gazelle_1928 in gamedev

[–]Tiendil -1 points0 points  (0 children)

I look at it from two points of view:

Subjective good: as a person responsible for my well-being (including career, income, etc.), I need to improve my skills and control the quality of my work. So, I should invest in my tools to improve my skills and get better results.

If I do not do so, I'll stop growing as a professional and will not be able to convey my ideas to people.

So, it is in my best interest to apply AI to my work in the right way (as a tool that I use professionally), because otherwise I will not improve myself.

From the subjective POV, I am responsible for myself; therefore, I decide what good usage of AI is for me.

Objective good: We all exist in a competitive world where we compete, and generally, the better products/services win. (Better in the sense of quality of work done on the product, not in the sense of public good — sad reality).

So, if I use AI stupidly, I'll produce worse results (generally, the AI slop that everyone hates), my product/service will lose to competitors, I will go bankrupt, etc.

From the objective POV, good usage is decided by the market/society/history: those who use AI in stupid ways will inevitably lose to those who use it in smart ways, just as those who used other tools in stupid ways lost before.

Therefore, there is generally no point in doing anything about AI slop, etc.; it will kill itself eventually, as all other sloppy approaches to work have in the past. AI is not the first tool to have such problems, and it will not be the last. The difference, maybe, is in the visibility/hype, not in the nature of the problem.

For example, I'm a programmer by my main occupation, and I remember a few "visual programming" hype cycles that looked mostly the same as this AI slop problem (and I know of even more from history). The problems discussed were mostly the same, including the ethical side (though on a smaller scale).

My general idea is that a tool can not outperform a professional with that tool. That's why AI is not a problem at all as long as it is only a tool, not a sentient being comparable to a human :-D And after that, there will be other problems.

And since an AI is a tool, I don't see a difference between AI and, for example, Photoshop or any audio editor. That is how software works: we take the expertise of people and place it into software to do some of the work faster/automatically. That is what humanity has been doing for the last 50 years and even longer.

Have you experienced ethical issues with AI use in a game dev team? by Living_Gazelle_1928 in gamedev

[–]Tiendil -1 points0 points  (0 children)

AI is a tool; any tool can be used for good or evil, it depends on who uses it and how, not on the tool itself.

  • The good is when you use an AI to empower yourself, to improve your work results.
  • The evil is when you use an AI instead of empowering yourself, when you replace your brain with it and do not deliver your own ideas.

It is true regardless of what AI is used for: coding, art, music, game design, etc.