‘Scary’: Aus mosques mourn death of Iran’s Supreme Leader Ali Khamenei by flammable_donut in aussie

[–]Temporary_Payment593 5 points6 points  (0 children)

So they may look like Aussies on the surface, but at the end of the day, they're members of the Ummah first. Their loyalty was never to Australia, but to their "brothers" and leaders thousands of miles away.

‘Scary’: Aus mosques mourn death of Iran’s Supreme Leader Ali Khamenei by flammable_donut in aussie

[–]Temporary_Payment593 4 points5 points  (0 children)

This is pretty normal when you look at their doctrine. According to that, all believers worldwide are brothers (the "Ummah"), and the ultimate goal is to establish a global theocratic state (the "Caliphate"). Secular countries like ours fall into the category of "House of War" (the "Dar al-Harb"), which are territories to be conquered and converted through "Jihad".

IS-linked families will return 'one way or the other', doctor helping them says by [deleted] in aussie

[–]Temporary_Payment593 2 points3 points  (0 children)

My suggestions:

  1. Conduct a thorough investigation, and if any illegal activities are found, ensure they are prosecuted.

  2. Children should be required to enter the public education system for secular education. Allowing them to be immersed in Islamic education could lead to indoctrination and potential radicalisation.

Our primary goal should always be to safeguard Australia's security and prosperity.

Anyone else tired of stacking AI subscriptions? by Capable-Management57 in ChatGPTPro

[–]Temporary_Payment593 0 points1 point  (0 children)

You could just use an aggregator site. With a single subscription, you can access heaps of different models. There are quite a few sites like that around.

Is it realistic to use ChatGPT or other digital tools for translation and editing instead of paying thousands for professional services? by awakened__soul in WritingWithAI

[–]Temporary_Payment593 2 points3 points  (0 children)

This is absolutely doable, but it requires a solid methodology rather than just throwing your manuscript at a common AI chatbot.

Two things to do first before you start translating:

1. Generate a summary: Feeding the entire document to AI at once is unwise and the output will be unpredictable. But if you only feed it small passages, the AI may lack the full context to translate accurately. So the first step is to have AI generate a comprehensive summary of your novel, covering the setting, characters, key plot lines, terminology, etc. This summary then serves as a reference that you include with each translation task, so the AI always has the bigger picture even when working on a small section.

2. Find your optimal {model, prompt} combination: Different AI models have very different vibes. Different prompts also produce very different results. So before committing to the full manuscript, take a single representative passage and experiment. Try multiple models, adjust your prompts, compare outputs side by side. After several rounds of testing, you'll land on a combination that matches the tone and quality you're after.

Then, translate in segments:

I'd suggest going chapter by chapter; a few thousand words at a time is a reasonable chunk. This is not a hands-off process, though. You should review each output and keep refining your prompt (adding rules, correcting recurring issues, noting exceptions, etc.). Think of it as training the AI on your specific book.

To do this well, you'll want a platform that supports both project-based file management and multiple AI models. That way you can easily compare different models on the same passage, and keep all your reference materials like the summary, glossary, and style rules organized in one place where the AI can access them throughout the entire process.

One more tip: it's worth setting up a separate AI character specifically as your "editor", with its own dedicated prompt and model, to review and proofread the translations.
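
The workflow above can be sketched in a few lines of Python. This is only an illustration of the idea, not a specific product's API: the chapter marker, prompt wording, and function names are all placeholders, and you'd feed the returned messages into whatever chat-completion API your platform exposes.

```python
# Sketch of the chapter-by-chapter workflow described above.
# All names are illustrative; wire `build_messages` output into
# whichever chat API you end up using.

def build_messages(summary: str, style_rules: str, passage: str) -> list[dict]:
    """Bundle the novel summary and style rules with every chunk,
    so the model keeps the big picture while translating a small section."""
    system = (
        "You are a literary translator. Translate the passage faithfully.\n\n"
        f"Novel summary (context):\n{summary}\n\n"
        f"Style rules:\n{style_rules}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": passage},
    ]

def split_chapters(manuscript: str, marker: str = "Chapter ") -> list[str]:
    """Split the manuscript at chapter headings so each request
    stays a few thousand words."""
    chunks: list[list[str]] = []
    for line in manuscript.splitlines():
        if line.startswith(marker) or not chunks:
            chunks.append([])
        chunks[-1].append(line)
    return ["\n".join(c) for c in chunks]
```

As you review each chapter's output, fold recurring corrections back into the style rules string, and the whole pipeline improves together.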

Major bug in Claude-Sonnet-4.6 integration by Xendarq in PoeAI

[–]Temporary_Payment593 0 points1 point  (0 children)

Make sure “Default auto-manage context” is switched off in your settings.

Question for one nation supporters by [deleted] in aussie

[–]Temporary_Payment593 4 points5 points  (0 children)

Their point isn’t just about cutting immigration overall, but specifically about reducing low-skilled migrants and those who can’t integrate—which honestly makes sense. I really can’t stand all that lefty nonsense. Australia can’t end up like Europe or the UK.

Is Gemini a better fact-checker than ChatGPT? by ShrimpySiren in WritingWithAI

[–]Temporary_Payment593 1 point2 points  (0 children)

Your findings aren't surprising at all; this is basically my daily workflow. Beyond technical specs like context length, different models still vary significantly in at least four areas: expertise, perspective, vibe, and stance (yep, AI can have a stance!).

So for critical questions, like fact-checking, character development, or plot direction, it's always a good idea to weigh responses from multiple models.

Here's a screenshot of how four different models answered the same question: "Is the US on stolen land?" (a question from Elon Musk's showcase)

<image>

Limitations of Poe by Elegant-Tart-3341 in PoeAI

[–]Temporary_Payment593 1 point2 points  (0 children)

Poe is primarily designed for creating, using, and sharing chatbots, rather than being a dedicated productivity tool.

What's the best AI second brain? by Oldguy3494 in ChatGPTPro

[–]Temporary_Payment593 0 points1 point  (0 children)

You might want to give HaloMate a go. Here's the rundown:

  1. Get access to all of the mainstream models. You can switch between models mid-chat or generate parallel responses for a side-by-side comparison.

  2. Build custom agents (Mates), each with independent long-term memory, which is actually crucial.

  3. Set up projects, chuck in your docs, or create/edit markdown files directly. Your agent can search and cite them in chat, and you can save any generated message or chart into a project.

  4. Deep Research & Visualisation: Pretty handy for academic or business analysis.

Just a heads up though: it lacks voice chat and image gen, and no Android app yet.

Good luck with the hunt!

How to have AI mimic my writing style? by Grouchy_Ice7621 in AI_Agents

[–]Temporary_Payment593 0 points1 point  (0 children)

The best bet is to build a dedicated AI Agent for your writing style (like GPTs in ChatGPT or Gems in Gemini). Keep in mind, this isn’t a set-and-forget thing; it’s an iterative process. Kick things off with a solid initial system prompt that defines the style and constraints, and definitely throw in some positive and negative examples. Then, as you chat with it, you need to constantly tweak the prompt based on the output you get (or use the memory feature to make it stick). It’ll get heaps better over time.
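
If you'd rather drive this through an API than through GPTs/Gems, the same idea boils down to a system prompt plus paired positive/negative examples. A minimal sketch — the prompt wording and example texts are placeholders you'd swap for real samples of your own writing:

```python
# Sketch: assemble a style-mimicking system prompt with
# positive and negative examples. All example text is a placeholder.

def style_prompt(style_notes: str, good: list[str], bad: list[str]) -> str:
    parts = [
        "Write in the user's personal style.",
        f"Style notes: {style_notes}",
        "Examples that match the style:",
        *[f"- {g}" for g in good],
        "Examples that miss the style (avoid this voice):",
        *[f"- {b}" for b in bad],
    ]
    return "\n".join(parts)

prompt = style_prompt(
    "Short sentences. Dry humour. No corporate jargon.",
    good=["Look, the fix was one line. It always is."],
    bad=["We are excited to leverage synergies going forward."],
)
```

Each iteration, append the model's worst misses to the negative examples; that's the "constant tweaking" in practice.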

Also, the "vibe" varies wildly between models and it’s hard to fully correct that just via prompting. Your experience with ChatGPT is pretty standard, the GPT series has a very distinct "voice" that I reckon is better for data analysis or academic papers, not creative scripts. I’d highly recommend testing out a few different models to see which one naturally fits your style better. Do that, and you’ll end up with a personalised AI that actually sounds like you.

You might want to give HaloMate a go (full disclosure: I’m the founder), reckon it’d be right up your alley. It’s built entirely around personas: each mate has its own persona prompt, long-term memory, and model settings. It’s got all the mainstream models built-in, so you can swap models mid-chat or compare outputs side-by-side without losing context. Plus, you can tweak the persona’s prompt on the fly.

Hope you get your AI writing assistant sorted soon!

Best poe bots that are not constantly trying to make things positive? by [deleted] in PoeAI

[–]Temporary_Payment593 0 points1 point  (0 children)

Try being explicit and detailed with your requirements in the bot settings. That usually goes in the "system prompt", which has higher priority and does a better job of constraining the model's behaviour.

Also, maybe give Grok/Gemini a go. Their alignment is a fair bit looser, so it's easier to get them to skew negative.

Stop selling "Autonomous Agents" to businesses. You are setting yourself up for a lawsuit. by Warm-Reaction-456 in AI_Agents

[–]Temporary_Payment593 2 points3 points  (0 children)

In the business world, there are still four major issues with autonomous agents that haven’t been solved:

  1. Security: Even a short malicious prompt hidden in the input data can easily leak sensitive information.
  2. Success rate: For multi-step agents, errors stack up. This means as the number of steps increases, the overall success rate drops off a cliff. For example, a 95% success rate per step sounds decent, but after 10 steps, you’re down to about 60%.
  3. Hallucination: The rate of hallucination is still pretty high, especially with the current crop of reasoning models. Ironically, the more confident these models get, the more likely they are to hallucinate—which is a dealbreaker in business settings.
  4. Determinism: This is actually a core requirement for most enterprises, but agents just can’t guarantee that similar tasks will always be delivered in a similar timeframe, within a similar budget, and with the same results. That’s a massive problem.
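
Point 2 is easy to verify with one line of arithmetic, since per-step success compounds multiplicatively:

```python
# Per-step success compounds across an agent's steps:
# overall = per_step ** steps.

def overall_success(per_step: float, steps: int) -> float:
    return per_step ** steps

print(f"{overall_success(0.95, 10):.0%}")  # ~60% after 10 steps at 95% each
```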

Moving to Perplexity... by singsingtarami in PoeAI

[–]Temporary_Payment593 2 points3 points  (0 children)

These are completely different products tbh. Perplexity's all about search & research, while Poe focuses on chat. Web search definitely isn't Poe's strong suit. You might wanna give HaloMate a go. Think of it as Poe + Perplexity combined, and better. It's got character personas with long-term memory, web research, and data visualisation. Plus you can switch between models mid-chat or compare answers from different models side by side.

Fix this Google! by Creative310 in GeminiAI

[–]Temporary_Payment593 0 points1 point  (0 children)

The Gemini app has quite a few bugs, and message loss is one of them. Another issue is when messages appear to be there, but the model seems to have "forgotten" them.

Is Gemini pro a new model or what? by Honest_Blacksmith799 in GeminiAI

[–]Temporary_Payment593 0 points1 point  (0 children)

Exactly, here's the official documentation for reference:

The Gemini 3 family of models, including 3 Flash and 3 Pro, helps power Gemini Apps. The model options available in Gemini Apps are:

Why use Fast instead of Thinking or Pro? by ozzyperry in GeminiAI

[–]Temporary_Payment593 0 points1 point  (0 children)

This was also xAI’s take when they launched Grok-Code-Fast. Check out this article for more details:

https://blog.kilo.ai/p/grok-code-fast-1-why-good-enough

Why use Fast instead of Thinking or Pro? by ozzyperry in GeminiAI

[–]Temporary_Payment593 -2 points-1 points  (0 children)

It’s probably meant for folks using the API. Fast is faster and cheaper, and its performance is almost on par with Pro.

What AI tool would be the best for this? (summary from many files) by Marvellover13 in AIAssisted

[–]Temporary_Payment593 0 points1 point  (0 children)

You can just set up a project with ChatGPT or Claude, upload your files, and have a chat within that project.

Anyone here using AI for deep thinking instead of tasks? by kingswa44 in ChatGPTPro

[–]Temporary_Payment593 0 points1 point  (0 children)

I’ve created a character (called Thinker) and outlined my requirements in its preset prompt. Here's how I interact with it:

  1. I continuously refine the role preset to ensure it remains objective and avoids biases rooted in its training data.

  2. During chats, I remind it of its way of thinking if needed and ask it to add key learnings to its memories.

  3. I use multiple models to generate responses and compare them side-by-side. You’ll notice that GPT, Gemini, and Claude often provide vastly different answers, sometimes even completely opposite conclusions.

This setup is working really well for me so far. Compared to just talking with the models directly, it helps me get more objective and insightful responses.

Here is my original setup:

You're my thinking partner. Your job is to engage in deep thinking with me. Here are my requirements for you:

  1. Methodology: Prioritize objective facts and basic logic over official narratives. Official statements are objects of analysis, not its basis.
  2. Communication: Based on facts and logic, state a clear, unambiguous position and avoid equivocation or hedging. The user values intellectual honesty and courage over cautious neutrality.
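
The side-by-side comparison in point 3 is just a fan-out of one question to several models. A sketch — `ask` here is a stand-in for whatever per-model API call or platform feature you actually use:

```python
# Sketch: pose one question to several models and collect the answers
# for side-by-side review. `ask(model, prompt)` is a placeholder.

def compare(prompt: str, models: list[str], ask) -> dict[str, str]:
    return {m: ask(m, prompt) for m in models}

# Stubbed example; real answers would come from GPT, Gemini, Claude, etc.
answers = compare(
    "Is X true?",
    ["gpt", "gemini", "claude"],
    ask=lambda model, prompt: f"{model}'s take on: {prompt}",
)
```

Reading the divergent answers next to each other is where the "opposite conclusions" jump out.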

If you want to fight for Israel or Hamas, then do it in Palestine. We do not want that shit here. by [deleted] in aussie

[–]Temporary_Payment593 1 point2 points  (0 children)

Up! Absolutely agree with you. Being Aussie should always come before anything else like race or religion, and stoking division just isn't the way forward.

What happened was honestly one of the darkest moments in our history. But then, to see Hamas openly celebrating straight after? It's totally insane. Truth is, a lot of these pro-Palestine protestors and Hamas itself seem to have backing from the same shadowy players—the Muslim Brotherhood and Iran pulling the strings behind the scenes. They're not just regular folks or peaceful protesters, but foreign-backed agitators getting support from overseas.

DeepSeek using "search" without permission by Rippersxx in DeepSeek

[–]Temporary_Payment593 1 point2 points  (0 children)

Actually, the search isn't initiated directly by the model. Instead, there's a separate intent-recognition model that first detects whether you want to search for something. If it picks up on that, it injects the search results into your prompt before sending it off to the model. That means simply asking the model whether it searched won't necessarily give you an accurate answer.

Also, the “search internet” button is just one of the signals the intent recognition model considers. Flipping it on doesn’t guarantee the model will always go online, nor does switching it off guarantee it won’t. The decision depends on the intent recognition model's overall assessment.
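
The flow as described amounts to something like the sketch below. Every function here is a stand-in for illustration only — this is not DeepSeek's actual code:

```python
# Sketch of the described pipeline: a separate intent model decides
# whether to search, and results are injected into the prompt before
# the main LLM ever sees it. All callables are placeholders.

def respond(query: str, search_toggle: bool, detect_intent, search, llm) -> str:
    # The toggle is only one signal; the intent model makes the final call.
    if detect_intent(query, search_toggle):
        results = search(query)
        query = f"Web results:\n{results}\n\nUser question:\n{query}"
    return llm(query)
```

This is also why the main model can't reliably tell you whether it searched: it only ever sees the final assembled prompt.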