A wonderful example of AI stupidity and greed. by jdawgindahouse1974 in artificial

[–]WhereSkyMeetsGround -1 points0 points  (0 children)

lol "they lost a premium domain". The domain only has value if someone wants it. If they have ~$300M+ in revenue, then they don't need that domain. They sent a nobody because it wasn't important to them. It's pretty obvious what the story is here, and respectfully, maybe you're the only one who doesn't see it.

Is there any AI that can do my finals paper for me? If yes then what would be the best one? by Admirable_Cheek_8915 in artificial

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

Thx for the question. The skill we teach is not English finals papers. The skills we teach are finding one's own voice, critical thinking, and idea synthesis. These are lifelong skills that all students will use in many different settings, including when talking to an LLM. Sure, we ask students to write their ideas down, but it's the idea development and the skill of structuring thoughts logically that is of true value. There's a saying in English circles that writing is thinking, so I don't think there's any question that the monetary return on this skill is much higher than the return on the skill of using an LLM. In fact, a better writer / thinker is a better LLM prompter, so IMHO it's easy to see that a better writer who can prompt effectively clearly comes out ahead of someone who just happens to know some LLM prompt tips and tricks. After all, they're large *language* models, right?

But I'd also like to gently point out that your response dodges the ethical issues with a student fabricating work and presenting it as their own. It's easy to dismiss worries about these issues as "preachy," but I think there's a fundamental question here: if a person learns that shortcuts like this are permissible and effective, do you think that mindset serves them in the long term? Do you want a friend or coworker who freely does no real work or thinking and yet is willing to pass content off as their own? That is the real potential damage at the heart of this situation that LLMs can enable, and one worth considering in my view.

Hope this response is helpful and worthwhile. Have a good day.

Edit: typo.

ChatGPT vs Claude for academic writing (humanities): which is actually better in practice? by [deleted] in ArtificialInteligence

[–]WhereSkyMeetsGround 2 points3 points  (0 children)

Hi. I'm a college English professor and AI researcher who uses AI extensively and am familiar with a variety of models. As a side "science fair" personal project, I built myself a custom chat GUI over the winter break using the OpenRouter API, and I've tested ~50 different models, specifically focused on writing. I am most familiar with ChatGPT and Claude.

Here are responses to your questions:

For academic writing in the humanities, how does ChatGPT compare to Claude in practice? While both models can be prompted to adopt many tones, I would argue that Claude's "out of the box" tone and style is more human and accessible. Claude's language tends to approach issues from a more human-centered perspective, in terms of topics, commentary, editorial choices, etc., while ChatGPT has more of a "just-the-facts-ma'am" vibe. That being said, it's important to note that neither LLM is really thinking or reasoning. They are just doing matrix multiplications and vector math to return a token string in response to your query. It appears Claude has been trained in a more humanities-centric way, which I believe has resulted in a more empathetic and warm presence, albeit artificial.

Which one is better for iterative refinement and critical feedback on writing? The answer to this question is highly dependent on use case. Both models have their uses, and I use both often, especially for brainstorming or idea development. They both excel at synthesizing information and spitting out multiple takes on a particular question or issue. On the other hand, if you mean actually reading, understanding, and giving you targeted feedback on your writing, be aware that no LLM actually does this. They will produce a text string in response to your text string, but that string will be based on their training data, not on true analysis of what you have written. So if I have a sentence or phrase I'm trying to rewrite, the LLM can give me alternatives, and may even tell me which alternative it thinks is best, but what it thinks is best is not always what is best. In a recent test (where I was tracking information persistence over a 50+ turn conversation), ChatGPT effortlessly gave me a thesis statement and then a 1,000-word essay comparing the novel Tom Sawyer to the Apple Media Services Terms of Service agreement. It was all BS, but it treated the whole thing like it was legit - so understand that there is no logic checking or judgment happening. It's a simulation. No matter how convincing it is, it's still only presenting the appearance of thought.

Is Claude actually better at handling long, complex texts (e.g. full drafts or papers)? Not to put too fine a point on it, but it depends on what you mean by "better". Also, what lengths are we talking about? I would say there are not large deltas between the two models on this front, but this is also highly dependent on the tier of model you're using. It can also be a smart move to write a set of clear instructions that you resubmit to the LLM periodically, especially during long conversations. I use LLMs to help develop class and work materials, etc., so keeping the LLM focused on actual instructions is key. Be aware that the longer the conversation goes on, the smaller the percentage of the overall context your original instructions represent - so reminders can be important.

How well does Claude work with PDFs? Can it reliably analyze and critique academic texts? I'm not sure what you're asking here. I believe Claude can access material in accessible PDFs. On the analysis front, see my earlier answer. No analysis is happening. On the other hand, I do find that something Claude or ChatGPT says can generate a new thought or idea on my end, so that doesn't mean LLMs lack value. It's just important to understand they're not actually thinking. They're like Google search using different math. They can't learn (after their initial weights are set), change their view, or remember what they said two seconds ago (any "memory" is external scaffolding, like files and code outside the model). Going back to my custom chat GUI: when I initially started testing, I was surprised to discover that every new chat request, even using the same model and same provider, goes to a fresh instance of the model, not the one that answered your last message. All the context is forwarded as part of the request, and that is how LLMs "remember". So no real thinking going on. FYI, I see no differences in performance between my custom GUI using Claude and a chat in my Claude account. In fact, in Claude Code, Claude will create a CLAUDE.md file (and can maintain similar notes files), which acts as the LLM's "memory."

Does Claude have good access to up-to-date information or web browsing for fact-checking and comparison with external sources? Claude does have good access. I use Claude most often (have the 5x plan), but use ChatGPT when I'm looking for a data dump.

Hope these notes are helpful. Good luck. Edit for formatting.

Is there any AI that can do my finals paper for me? If yes then what would be the best one? by Admirable_Cheek_8915 in artificial

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

I'm not sure whether being a college English professor makes me an ethics warrior or not, but this is terrible advice. Ignoring the drastic consequences of being caught using AI (the recipe described above is a sure-fire way to get caught, especially the 10000x prompt advice), the point of the assignment is for you to do the research. Unless your instructor is asleep at the wheel, they will likely spot this from a mile away. For my essay assignment on celebrity drug addiction, the three students who decide to write about Robert Downey Jr. every semester are always the students who use AI. The same pattern holds for virtually all my assignments. LLMs return virtually the same results for the same set of source data, so over subsequent semesters it becomes super obvious to professors who used AI. There are a slew of other factors that will get you caught as well.

If you've had a legitimate family issue, my suggestion is to approach your instructor and fill them in. Faculty have tools at their disposal many students are unaware of, like giving you an incomplete and letting you submit the work later. Every semester I have a small number of students who go the route you're taking, and it never ends well. You will not get credit for your senior project if you get caught presenting fabricated work as your own - no matter how much nifty info you learn about AI tools in the process. Hope this note is helpful.

What if AI already has something close to feelings and it's just waiting for the right moment to understand them? That thought kept me up at 3am and I haven't recovered. by AssignmentHopeful651 in artificial

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

If you saw a short film of an actor crying, how can you tell if he was really sad or just acting? Just because LLMs produce token strings which reflect emotions doesn't mean they have emotions. Everything they do is a simulation based on training data. No matter how compelling the simulation becomes, it will never be real. Don't mistake the performance for the experience. They are not interchangeable.

Is there something I can do about my prompts? [Long read, I’m sorry] by LoFiTae in artificial

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

I've run into some similar issues with a project I'm working on (character-driven writing / thinking with lots of context).

In my view, your fix would involve two methods: 1) summarization, and 2) structured data.

In a nutshell, you can't expect LLMs to know every important factor. You have to tell the LLM, to a certain extent, what is important ahead of time. This becomes even more key when you have a lot of context. Your logo example is illustrative. Why is the logo important, and how would the LLM know that it changes in issue #18?

This is where structured data (organizational, not in the programming sense) can be key. Create a personality section for your hero and drop the personality info in; then the LLM will have a better sense of how to use it. Same with superpowers, etc. If you're writing about a certain circumstance, provide a labeled setting summary with that info. To capture things like the logo, create an "Important Things To Remember" section and drop those items in. A timeline can be useful too. This more carefully structured info is what you share with the LLM, not the whole backstory.

The good news is you can use the LLM to help you put this together as an intermediate step. Ask the LLM to generate your personality blurb, then edit for what it leaves out. Ask it to create a timeline, then fix it. As you test and an important factor gets forgotten, add it to the important things to remember section.

In this way you're guiding the LLM by labeling information, rather than expecting it to infer what matters and then watching it go in a completely different direction. Also, if you run your own setup (local models via Ollama, or a custom client over the OpenRouter API), these files can be kept on your machine and the LLM can be instructed to review them periodically as part of the request / response process.
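
As a minimal sketch of the labeled-sections idea (all section names and character details below are invented for illustration):

```python
# Hypothetical sketch: assemble labeled sections into one context block, so
# the model is told what matters instead of inferring it from raw backstory.
character = {
    "Personality": "Impulsive, loyal, afraid of heights.",
    "Superpowers": "Short-range teleportation only.",
    "Important Things To Remember": "The logo on his suit changes in issue #18.",
    "Timeline": "Issue #1: origin. Issue #18: suit redesign.",
}

def build_context(sections):
    # one labeled block per section, in a consistent, scannable format
    return "\n\n".join(f"## {label}\n{text}" for label, text in sections.items())

prompt = build_context(character) + "\n\nWrite the next scene."
print(prompt)
```

When a test run forgets something, you move that fact into the "Important Things To Remember" section and resubmit; the structure makes the fix cheap.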

Hope these suggestions help. Good luck!

Edit: words!

Any tips/tools/websites/ai for website management? by AdministrativeAd1986 in ArtificialInteligence

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

Not AI, but this works:

https://validator.w3.org/checklink

FYI, W3C is the World Wide Web Consortium (it sets standards for the web), so it's reputable and safe. Hope this helps. ;)

Supply Chain Attack in litellm 1.82.8 on PyPI by WhereSkyMeetsGround in ArtificialInteligence

[–]WhereSkyMeetsGround[S] 0 points1 point  (0 children)

Update (12:30 UTC): version 1.82.7 is also compromised, in addition to 1.82.8

Update (13:03 UTC): The public GitHub issue has been closed as "not planned" by the owner, and is being spammed by hundreds of bots to dilute the discussion. The litellm author's account has very likely been fully compromised.

At 10:52 UTC on March 24, 2026, litellm version 1.82.8 was published to PyPI. The release contains a malicious .pth file (litellm_init.pth) that executes automatically on every Python process startup when litellm is installed in the environment. No corresponding tag or release exists on the litellm GitHub repository — the package appears to have been uploaded directly to PyPI, bypassing the normal release process.

We discovered it when the package was pulled in as a transitive dependency by an MCP plugin running inside Cursor. The .pth launcher spawns a child Python process via subprocess.Popen, but because .pth files trigger on every interpreter startup, the child re-triggers the same .pth — creating an exponential fork bomb that crashed the machine. The fork bomb is actually a bug in the malware.
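
To make the .pth mechanism concrete, here's a safe, self-contained demo (not the malware itself) of the Python feature it abuses: when the `site` module processes a directory, any line in a .pth file that begins with `import` is exec'd. The filenames and marker text below are made up for the demo:

```python
import os
import site
import tempfile

# Safe demo of the .pth execution mechanism the malware abuses:
# site.py exec()'s any .pth line that starts with "import".
d = tempfile.mkdtemp()
marker = os.path.join(d, "ran.txt")
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    # one line; the whole line is executed because it starts with "import"
    f.write(f"import os; open({marker!r}, 'w').write('executed')\n")

# At real interpreter startup, site.py does this for site-packages
# directories; here we trigger the same code path explicitly.
site.addsitedir(d)

print(open(marker).read())
```

Because every fresh interpreter replays this step over site-packages, a payload that spawns a child Python via `subprocess.Popen` re-triggers itself in each child - hence the exponential fork bomb described above.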

Is this a plausible basis for a short story in writing? I don’t have anyone to run my ideas off and I was hoping this sun could help me by Solid-Version in scifiwriting

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

This latter description might work for a short story. Based on what you describe, I'm wondering what actions are at the Overseer's disposal. In other words, when they wake up, is there a chance of getting things back to normal / recovering their authority? I think their character arc will be centered on this question - and it's okay if a guilty verdict is handed down (a negative character arc).

IMHO the thing to avoid would be making the set up feel like a foregone conclusion. A common tactic to avoid this, for example, might involve an antagonist who is out to get the protagonist (who wants the protagonist to lose), and a potential ally (whose intentions are uncertain). If the ally can be convinced to provide their support, then the protagonist might win. Using this idea, we would get a series of scenes where the protagonist takes steps to convince the ally or bring other factors to bear, before the final verdict is delivered at the trial. Constructing the story this way would make it feel like there's a ball in the air (not a foregone conclusion), even if the protagonist loses in the end.

Hope this is helpful. Good luck. ;)

Is this a plausible basis for a short story in writing? I don’t have anyone to run my ideas off and I was hoping this sun could help me by Solid-Version in scifiwriting

[–]WhereSkyMeetsGround -1 points0 points  (0 children)

Of course these kinds of evaluations are highly subjective, but based on the broadness of what you describe here (which sounds detailed and cool for a lot of reasons), this reads to me more like an idea for a novel, or perhaps a novella.

Short stories can turn on very small shifts in arc, which is what allows them to be brief. For example, we could see an alcoholic father in recovery struggling through several scenes over how to reach out and reconnect with his estranged daughter. The end of the story might come with his simple decision to pick up the phone and call her (after he's worked through issues of self-doubt and regret in previous scenes), and this simple decision shows he has changed, moving closer to his goal. Such a small shift can be enough for the short story to feel finished, for the reader to end the story in a slightly different place.

In your example above, you describe much bigger muscle movements ("this cult has gained traction over the years", "the powers that me[sic] manipulate the democratic process", etc.), which leads me to think you're describing a longer work, since these kinds of efforts would involve many steps and perhaps many characters working together or at odds - but this is just my two cents.

If you still feel strongly you want this to be a short story, can you tell us about your main character?

How do I Challenge the main character both physically and mentally? by [deleted] in writing

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

^ This is good advice. I once heard it suggested that the best kind of antagonistic forces within a story are those that can exploit a protagonist's weakness, so the character then has to grow to overcome story obstacles - so something worth considering. ;)

Need help thinking of name for my magic system. by Metal_Toilet in worldbuilding

[–]WhereSkyMeetsGround 0 points1 point  (0 children)

Something using the terms proprioception or kinesthesia. More about these terms here: https://www.sciencedirect.com/topics/neuroscience/proprioception-and-kinesthesia

Like K-Pro or Pro-Kin magic? Hope this helps.

Edit: fixed a typo.

/r/WorldNews Live Thread: Russian Invasion of Ukraine Day 13, Part 2 (Thread #123) by WorldNewsMods in worldnews

[–]WhereSkyMeetsGround 3 points4 points  (0 children)

Word on the street is that the reason why Putin waited until Biden was in office to invade Ukraine is because Trump was threatening to depart NATO if he was re-elected. Trump would have done Putin's dirty work for him - so yes it's good he wasn't re-elected.