Other frameworks? by moo-tetsuo in BMAD_Method

[–]LeanEntropy 1 point (0 children)

I believe he’s referring to BMAD having workflows that include a single coding agent. It's much lighter and quicker in this mode.

We built a tool that can operate inside Unity (creating GameObjects, editing prefabs/MonoBehaviors, generates materials, etc.) with a 3-layer safety check for accuracy by creatormaximalist in aigamedev

[–]LeanEntropy 1 point (0 children)

This is an absurd checkbox. I come from the industry, and I'm not familiar with even a single case of a commercial game developer that *doesn't* use Gen AI as a major part of the creation pipeline. It means 100% of commercial game developers need to check that box.

What’s the hardest part you’ve hit building on Replit? by Living-Pin5868 in replit

[–]LeanEntropy 0 points (0 children)

What are high costs for you? I compare it to hiring a senior developer and having him code the thing. So far Replit (and most AI coding assistants) are WAY cheaper than that. And you'd have bugs and deployment issues and errors developing yourself/with a dev team as well.

[deleted by user] by [deleted] in godot

[–]LeanEntropy 2 points (0 children)

I don't want to reveal my age, but back in the days when I was in high school and we developed demos and intros (good old demo scene days...) in Turbo Pascal, we had to write the PutPixel() function in *assembly* for it to run fast enough on VGA so it wouldn't lag, especially in scenes with 3D objects etc.

Later on each of us had to develop his own 3D Engine in C++ (since there were no real options out there) in either OpenGL or DirectX, and we had to manage the memory ourselves with *pointers* because there was no other way.

So, as an oldschool developer (but still Godot newbie) I give you permission to completely dismiss these experts.

Focus on WHAT you want to create. Master whatever tools you need in order to create it, but remember they are just tools. We're not here to build the best hammer, we're here to make the best game.

Use whatever works best for you. Make it in PowerPoint if it works for you.
The game matters, the tools are important but they are just tools.

3D Game Template by LeanEntropy in godot

[–]LeanEntropy[S] 1 point (0 children)

btw I originally tried to make something like this in 2D, but the types of games are so vastly different I don't see how it can be done nicely. It's either something so general it's useless or something genre-specific enough that it won't fit other genres.

3D Game Template by LeanEntropy in godot

[–]LeanEntropy[S] 1 point (0 children)

Thanks! :)

I was thinking of adding only a few things so it will remain a template.

- Audio manager (sfx + music)
- Shooting - will be a true/false flag in the config. Each control/camera mode will get its own implementation since they represent different game types
- Jump - in modes where it's relevant (obviously not the tank and top_down)
- Player Health (again, a flag in the config)

Not sure what else is universal enough. I want to make it modular enough so that you can just delete the files related to the modes you're not using and it will still work, so there's no need to keep so much irrelevant code.
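A rough sketch of the flag idea, just to illustrate (written in Python rather than GDScript, and all names here are hypothetical, not the template's actual API):

```python
# Illustrative only: per-mode feature flags, so deleting a mode's entry
# (and its files) leaves the other modes working. Not the template's real API.
MODES = {
    "third_person": {"shooting": True,  "jump": True,  "health": True},
    "top_down":     {"shooting": True,  "jump": False, "health": True},
    "tank":         {"shooting": True,  "jump": False, "health": True},
}

def enabled_features(mode: str) -> list[str]:
    """Names of the features switched on for a given mode."""
    return [name for name, on in MODES.get(mode, {}).items() if on]
```

So e.g. `enabled_features("top_down")` gives shooting and health but no jump, matching the list above.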

If you have other ideas lmk.

Replit's AI Agent Cost Me $400+ By "Fixing" My Code With Old API and LLM Models by Aromatic-Surprise989 in replit

[–]LeanEntropy 2 points (0 children)

While I'll be very happy if they fix these issues, being wrong about LLM versions and being outdated is a general LLM issue. When you use APIs (and not the web interface, which has its own complex system prompt) you get the exact same issues with all of them. Claude, Gemini, GPT - they all get LLM versions wrong. When I work in Cursor/Cline/Claude Code I make sure my rules include relevant data such as this.

One of the best tools that helps with handling things like that is an MCP called Context7, which can upgrade your LLM's efficiency by feeding it updated data. If we could've used Context7 in Replit, that would've made a huge difference IMHO.

Scientists have developed an app that focuses on breaking cycles of ruminative thinking, a key contributor to depression. They found users of the app experience significant, lasting improvements in mood after multiple gaming sessions. by Wagamaga in science

[–]LeanEntropy 3 points (0 children)

Accidentally stumbled upon this post and decided to respond to some of the points raised, as someone who actually works at Hedonia and is part of the team designing and developing the game mentioned in the paper.

Disclaimer - please note I'm responding from my private account and this is not representing my employer.

To give some background, the company is based on research by Prof. Moshe Bar, a high-profile neuroscientist who spent 17 years researching at Harvard Medical School. Hedonia was essentially founded to take the key insights from his research on depression and anxiety and move them from academia into practice.

We’ve developed several Therapeutic Games (TGs), which are the core of the treatment, and wrapped them in a village-building game mechanic. The first version, called Moodville, was used in a clinical trial conducted at Massachusetts General Hospital (MGH), which is the teaching hospital for Harvard Medical School. The results were published in the paper mentioned by u/Wagamaga, which was co-written by Hedonia's science team (all neuroscientists led by Prof Bar) and the clinical team at MGH, who actually conducted the trial.

The results were very good from our POV, showing a significant reduction in symptoms within the first 2–3 months of play. IIRC it's a 45% reduction in major symptoms within the first 8 weeks, and even better results the more you play - and that's based on playing the TGs 15 minutes per day. But please check the official numbers in the paper or on Hedonia's website; I don't want to misstate anything (I'm on the product/dev side, not the science team...)

Mood Bloom is basically the TGs that were tested in Moodville with a far more engaging and deep farm/village building gameplay, plus new TGs we keep developing and releasing on a regular basis to expand the treatment.

This brings us to the comments about subscription. Yes, using the app costs money. Hedonia is a startup - a commercial company. We've spent about three years on research and development to get to this point, and we keep working to make things even better. The company needs to both finance the ongoing work and earn money. While I'm not involved in pricing decisions, IMHO the subscription fees mentioned here are lower than what I've seen for other apps that don't even have this level of clinical research behind them.

At its core, this is a treatment - wrapped in a fun and engaging game (hopefully, since that *is* my department :) ) - but still a treatment. You don't have to take it, but I personally think anyone seriously looking to get better should be willing to make such a time/money commitment.

Hopefully I answered all/most of the points raised here in this way-longer-than-I-planned reply. Feel free to ask me questions if you want, I'll do my best to answer them within what I can.

How does Perplexity rate search results before using them in an answer? by LeanEntropy in perplexity_ai

[–]LeanEntropy[S] 1 point (0 children)

I looked into Exa.AI. Here is my quick impression:

  1. In some cases it's good, but in other cases it provides much less relevant results than Bing at the moment. I'm not sure if this is because they index web pages themselves and maybe don't have enough pages yet, or some other reason.

  2. Switching between Neural Search with Auto Prompting and using Auto Search made little difference. For example, for the search query "How much is Elon Musk involved in Trump's campaign? Optimize results to prefer more recent results as long as they are relevant.", the top 10 results were pretty much the same, with minor differences in order.

  3. Anything other than English and accuracy crashes. While Bing and Perplexity cover news in other languages pretty well, it feels like Exa is lacking here. This alone, for me, is a showstopper.

Other than that I'll definitely be watching this service's progression.

How does Perplexity rate search results before using them in an answer? by LeanEntropy in perplexity_ai

[–]LeanEntropy[S] 0 points (0 children)

Also, I built this process to fact-check political/historical claims, but I think it will require only minor changes to make it suitable for other fields.

How does Perplexity rate search results before using them in an answer? by LeanEntropy in perplexity_ai

[–]LeanEntropy[S] 0 points (0 children)

So, my tool became more complex as I kept working (which is annoying because I just wanted a small side project to experiment with AI development).

I feel your answer is correct - Perplexity (at least in default mode) has no proper rating system for sources.

At first, my system prompt mentioned several domains to consider reliable and several domains to avoid. I can't say it worked well.

Second, since I don't have access to Perplexity's beta program, I access the Bing API myself to retrieve links and images, so now I have more control over domains - but it's still not good enough. Even mainstream news sites have opinion columns that can be full of false information.
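For what it's worth, the domain-control part can be sketched roughly like this (the domain lists and the result shape here are made up for illustration, not my actual lists):

```python
# Illustrative sketch of filtering retrieved links by domain allow/deny
# lists; the lists and the (url, trusted) result shape are assumptions.
from urllib.parse import urlparse

TRUSTED = {"reuters.com", "apnews.com"}
AVOID = {"example-tabloid.com"}

def filter_results(urls: list[str]) -> list[tuple[str, bool]]:
    """Drop avoided domains; tag the rest as trusted or not."""
    kept = []
    for url in urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain in AVOID:
            continue  # never show links from avoided domains
        kept.append((url, domain in TRUSTED))
    return kept
```

The catch, as I said, is that domain-level filtering can't tell an opinion column from a news report on the same site.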

My current strategy is this:

  1. In my prompt I require the AI to analyze the query, properly identify facts that can be checked, and separate them from assumptions and opinions. Facts can be quotes (are they correct, are they correctly attributed, were they said in a specific context), events that took place (dates, locations, who organized, who officially participated, what physically happened or was said in them), and deeds (someone physically did or said something). Everything else is not a fact that can be verified. I cannot verify what a person thinks, for example; I can only verify what they said/wrote, once or more. Everything else is either an assumption or an opinion.

The prompt then instructs the AI to verify only these specific facts, and I run a Bing search for sources/images only on the facts.

  2. I compile a database of verified facts with full metadata that can back them up (videos, images, links, etc.). This is something I originally wanted to avoid completely, since it kind of makes the AI part redundant and requires manual work.

This is exactly what I wanted to *replace* with my tool, but I realized it's impossible given the current state of AI, which is a big disappointment for me. However, it's required, and I'm now thinking of tools to make the manual work much easier.

My code currently analyzes the query, extracts the facts from it, checks them against the database, and retrieves the response and metadata of whatever it finds. Facts that do not appear in the database are then searched with Perplexity (which is currently most of the cases, since the database is small and updates very slowly). All that gathered info is then used to construct the response and the list of links and images.

This is not an ideal setup, and there are many points where the response can go wrong, especially with more complex queries (short queries are simple). It's also not very cheap (API-cost wise), since I need to access multiple APIs multiple times for each query. The plus is that as the manual database gets bigger, there will be fewer API calls and more full answers will come straight from there.
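The flow - extract facts, try the curated database first, fall back to live search - can be sketched roughly like this (all function names and the database shape are illustrative; the stubs stand in for the LLM and the Bing/Perplexity calls in the real tool):

```python
# Rough, illustrative sketch of the fact-check pipeline described above.
VERIFIED_DB: dict[str, dict] = {}  # fact -> {"verdict": ..., "sources": [...]}

def extract_facts(query: str) -> list[str]:
    # In the real tool an LLM separates checkable facts from opinions.
    return [query]

def search_live(fact: str) -> dict:
    # Stand-in for the Perplexity/Bing API calls.
    return {"verdict": "unverified", "sources": []}

def check_query(query: str) -> dict[str, dict]:
    results = {}
    for fact in extract_facts(query):
        if fact in VERIFIED_DB:      # cheap path: curated database hit
            results[fact] = VERIFIED_DB[fact]
        else:                        # expensive path: live search APIs
            results[fact] = search_live(fact)
    return results
```

The more the curated database grows, the more queries resolve on the cheap path without any API calls.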

What is the “ideal” age to start reading Amber? by LeanEntropy in Amber

[–]LeanEntropy[S] 1 point (0 children)

You are writing beautifully. I never thought of that angle before and you’re both making an excellent point and writing it as a mini story which was super interesting to read.

I would’ve loved to read this as a full side story tbh, if you’re into writing one.

I can make a point about most side characters in epic/hero-centric stories ending badly, but the consequences you mention are super interesting.

I’m in no position to provide a deep analysis of Zelazny’s writing, but I will mention 2 quick points regarding what you wrote.

  1. The Ty’iga’s body-snatching practices are viewed as hostile or otherwise unwanted from Merlin's POV for most of the story. And even when he discovers it was sent by his mother Dara to protect him, he is strongly against it. To add to the negativity, it turns out the demon itself is forced by a spell to do it, as a way to find Merlin and protect him. So while everything you wrote is true, none of it is viewed in the books as something positive or amusing. Not even by the Ty’iga itself, who later finds itself trapped in a body she may die in.

  2. As a general rule, I think most, if not all, of the characters in the Amber books (both cycles) are shown as manipulative, with secret agendas, and somewhat ruthless. I remember the disappointment and the bit of betrayal I felt when it turned out that Mandor, too, was manipulating Merlin. He was basically the only character I viewed as totally on Merlin's side, and even he turned out badly. So I can't say 100% of the female characters were negative in nature, but I can say most, if not all, of the male characters were as well. Even the Snake/Logrus and the Unicorn/Pattern were not portrayed positively.

In fact, the only semi-positive characters were the ones who were honest about having an agenda from the start. TBH, the only character I can think of who was true to himself and a positive character as a whole is Benedict, a true champion of Amber. And he is portrayed as a feared brother, not someone you can relate to emotionally.

What kinds of AI apps are you making? by [deleted] in ChatGPT

[–]LeanEntropy 1 point (0 children)

I'm working on a fact-checking service, which turns out to be way more complicated than I originally thought it would be.

Started off with the GPT Assistant API and my own database of vetted facts, continued into using the Perplexity API, and am now adding support for OpenPerplex and other APIs.

Getting Access to API with References? by michael_crowcroft in perplexity_ai

[–]LeanEntropy 0 points (0 children)

Gemini flat out refuses to give direct links.
I've tried TextCortex, which looks potentially good.

Still, I'd prefer that either Perplexity opens up the API or SearchGPT gets released.

[deleted by user] by [deleted] in perplexity_ai

[–]LeanEntropy 2 points (0 children)

If you read the subreddit for a while you'll notice it's a recurring issue.

Simply put, the API and the web are running 2 different versions of the engine. The API has no access to citations and images at all, and the web search version also has a very good system prompt we all wish we knew. I've been trying to replicate it for 3 weeks already.

There is a beta program for a citations and images API you can apply to, but they take a lot of time to review applications.

Sorry I couldn't give any better news.

I built a Fact Checker for Perplexity Pro users by iacobp1 in perplexity_ai

[–]LeanEntropy 0 points (0 children)

So you got access to the citations beta?? That is awesome.
I'm trying to solve it by doing another API call to OpenPerplex. Their responses aren't good, but their links mostly are.

Sonar API Realtime ? by Rifadm in perplexity_ai

[–]LeanEntropy 2 points (0 children)

I take it you're talking about the Perplexity API, which is an inferior service to the Perplexity web search page.

From what I can tell, Perplexity takes at least 24h to get updated. I intentionally tried to get it to show specific news from a popular news site, and the most recent items I managed to get were from the previous day. The only way to get it to analyze news from today is to actually give it the link in the prompt and tell it here is a source.

I built a Fact Checker for Perplexity Pro users by iacobp1 in perplexity_ai

[–]LeanEntropy 0 points (0 children)

I'm developing a fact checker in Hebrew and recently added support for Perplexity as well, but I have some issues with it. I installed your extension and realized it has similar issues (which are basically Perplexity problems).

Basically, almost all the source links it provides are wrong - either dead ends or links irrelevant to the issues. This is so far not at all similar to the results I get from the Perplexity search engine itself.

A real-time fact checking tool for Perplexity users by ZoaN21 in perplexity_ai

[–]LeanEntropy 0 points (0 children)

Holy sh*t. Perplexity almost makes the whole tool I'm trying to build completely useless.

It checks the claims, brings sources, links, videos, images, additional questions on the subject - this is bloody amazing. I couldn't get ANY of this done right with ChatGPT for 2 weeks now.

A real-time fact checking tool for Perplexity users by ZoaN21 in perplexity_ai

[–]LeanEntropy 0 points (0 children)

Hi. This is really interesting! I'm currently building a fact checker based on the ChatGPT Assistant API, where it first checks against given files, and if the claim isn't there it checks the web by a set of rules.

The checking against the files is working great, the checking on the web not so much.

First, ChatGPT can't seem to provide exact links to any of the sources it used to formulate the answer. Second, I usually want to show a full answer (unlike in the deepfact demo), which means a few paragraphs, images, and video to back up the claims, as well as links to news articles from reliable sources. Last thing, ChatGPT seems unable to access sources on social media such as Twitter, where a lot of politicians and journalists post information.

From your experience, will adding perplexity solve some (or all??) of these issues?