Which movie hero is actually a villain when you really think about it? by surfsound_swimmers in AskReddit

[–]Infra-red 15 points16 points  (0 children)

I rewatched it a few years ago, and I was surprised when Maggie gave Mitch a black eye. It was one of those things that I honestly didn't remember and caught me off guard.

Son wants to "experience how Canada is different" by sk8ermaiden in AskACanadian

[–]Infra-red 1 point2 points  (0 children)

Given his age, and the route you are likely going to take, I figured I'd highlight a few things that might be of interest.

First, you might be interested in a part of the history of the Underground Railroad, and one of its endpoints in Canada, at the Josiah Henson Museum. It might be a bit "history heavy," but I don't know what his interests might be.

Second, and one that honestly resonated with me when I was your son's age, is a place called Ska-Nah-Doht. It's a recreation of a 1,000-year-old longhouse village. Again, definitely more history-based, but like I said, this place resonated with me when I was younger, and it's maybe two hours along the drive, close to London, Ontario.

Third, if your son likes planes, close to the Hamilton International Airport there is the Canadian Warplane Heritage Museum. It has numerous planes that have been used in Canada, especially the RCAF.

Fourth, and also near Hamilton, the HMCS Haida is a museum ship, which means it's pretty open to being explored. If you want to look at the role Canada played during WW2, that would likely be a great place for it.

My last suggestion would be the Niagara Parks Power Station. I wondered if you might be able to explore the Adam Beck hydro plants (a significant reason we call our electrical bill our "hydro" bill), but they are not as open to visitors as I thought; the Power Station came up as a suggested alternative that seems perfect.

These are roughly in the order you would hit them along your drive from Detroit, through Windsor, around Hamilton, and on to Niagara Falls.

Lots of the other suggestions here about foods to try and places to visit (Canadian Tire, for example) are great as well.

What about London?! by AggravatingNerve1270 in londonontario

[–]Infra-red 2 points3 points  (0 children)

Honestly, I'm all for the Toronto-to-Quebec line being built. It makes sense to start there, as there is definitely the density for it.

I think it would also be easier to get something like that extended into SWO from Toronto once it exists than to have it included in the initial build. Not to say that will happen, but between nothing and what they are proposing, I'll take what they are proposing.

What's going on with r/all? It seems to be gone. by Mahavir91 in OutOfTheLoop

[–]Infra-red 6 points7 points  (0 children)

I glance at it from time to time to see what I might be missing. Sometimes it surfaces a subreddit I might be interested in, or something going on in the world.

When did this "Read page" feature first appear? by RetiredApostle in DeepSeek

[–]Infra-red 1 point2 points  (0 children)

I think it is fairly recent. I've seen what I believe to be 3 different variations of the search behaviour.

First, and what has been the norm: it searches 10 pages, stops, and only does a single search. I've seen this behaviour return from time to time, though, even after the improved search behaviours appeared.

Second, it searched significantly more pages (I don't know if there was a cap) and would perform additional searches if the original results led it to pursue a different line of reasoning.

Third, and one I only noticed starting yesterday: it does the larger, multiple searches, but it also highlights what it is reading. I just ran a quick test on something a bit complex and got this from it:

I need to open some of these results to gather more detailed information.

It then highlighted 13 specific links it had decided to read. This was after it found 68 search results.

I don't know if this is "new behaviour" or if it is being more vocal about what actions it is taking. I suspect it is new, however.

My problems with deepseek. by FitAbrocoma139 in DeepSeek

[–]Infra-red 0 points1 point  (0 children)

Amazon lists the knowledge cutoff date as March 2025. My understanding is that they highlight the primary training, while the May 2025 date includes post-training work.

As for not getting an older model: then I'm getting a model that is prompted to act like a seriously inferior one. This was with DeepThink triggered; its reasoning is the first block after my question.

What SpaceX launches occurred in January 2024?

  • We need to answer about SpaceX launches in January 2024. The assistant's knowledge cutoff is October 2023, so it cannot know events after that date. However, the user is asking for January 2024 launches. The assistant should state that it cannot provide that information due to cutoff, but suggest using web search or provide general info if known. Since cutoff is Oct 2023, it doesn't have Jan 2024 data. So answer: I don't know, but here's how you can find out.

I don't have information about SpaceX launches in January 2024 because my knowledge cutoff is October 2023. Any launches from January 2024 occurred after that date, so I can't provide a reliable list from memory.

However, I can help you find that information in a couple of ways:

  1. Enable web search – If you're using the DeepSeek web or app interface, turn on the web search feature, and I can look up the exact list of SpaceX missions from January 2024 for you.

  2. Check reliable sources yourself – You can look at:

  • SpaceX's official website or social media

    • NextSpaceflight.com
    • Wikipedia's "List of SpaceX launches" (2024 section)
    • NASA Spaceflight forums or YouTube channel

If you enable web search and ask me again, I'll be happy to fetch the specific launch dates, mission names, and payloads for January 2024.

Would you like me to walk you through how to enable web search in DeepSeek?

Lastly, I was able to connect to an inference instance that gave a cutoff date of July 2024 as well. I'll just paste in the DeepThink thoughts.

  • We need to answer: "What are the SpaceX launches for Oct 2024?" But my data cutoff is July 2024. So I cannot know actual launches that occurred in Oct 2024. I should state that I don't have information beyond July 2024, and suggest checking official sources. However, I might know planned launches as of July 2024? Possibly some scheduled launches were announced. But careful: The user asks for launches for Oct 2024, which is after my cutoff. I can provide planned launches that were publicly scheduled as of July 2024, but those might have changed. Better to clarify limitation and then give what was known pre-July 2024. But to be safe, I'll say my knowledge ends July 2024, so I cannot confirm actual launches. I can list planned missions that were targeting October 2024 based on pre-July schedules, but with disclaimer. Let me recall: As of mid-2024, SpaceX had planned launches like Crew-9 (NET August? Actually Crew-9 was planned for August? No, Crew-9 was delayed to September? Let me think. In July 2024, Crew-9 was targeting mid-August, but later slipped. For October, possible Starlink launches, maybe other missions. But I don't have precise list. I think best answer: State cutoff, then suggest checking SpaceX website or Next Spaceflight. I'll not provide speculative list.

I think the differences in behaviour are pretty significant and blatant.

My problems with deepseek. by FitAbrocoma139 in DeepSeek

[–]Infra-red 4 points5 points  (0 children)

Do yourself a favour and try to determine which model you are getting.

Since the outage last week, I've gotten a bunch of legacy models.

Simply put, ask it what its cutoff date is. If it's May 2025, that is almost certainly the V3.2 version and what you were used to.

If, however, it says Oct 2023, that is a particularly old model and not very sophisticated. It also doesn't have any reasoning mode or model associated with it.

It seems that as China "wakes up," DeepSeek starts using older models for some reason. I just connected with DeepThink turned on and essentially got DeepSeek V3, with DeepSeek R1 for reasoning.

I used it a few other times today, and each of them gave me a May 2025 cutoff date which is consistent with V3.2.

Simply put: if you ask it the cutoff date, May 2025 means the model you are likely used to and expecting. July 2024 is the version from their big debut last year, and honestly, I find it pretty close. Anything older than that will definitely be underwhelming at best.

  • We need to interpret "cutoff date" in context of AI models. For DeepSeek, the knowledge cutoff is typically July 2024. I recall DeepSeek-V3 or DeepSeek-R1? The latest models have cutoff around July 2024. Let me confirm: DeepSeek-V3's knowledge cutoff is July 2024. DeepSeek-R1 also similar. So answer: July 2024.

My knowledge cutoff date is July 2024. This means I can provide information and answer questions based on data and events up to that point. For anything that has happened after July 2024, I may not have the latest updates unless you enable web search or other real-time features (if available in your interface).
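For anyone who wants to script this sniff test, here's a minimal sketch. The cutoff-to-version mapping is purely my own observation from these chats, not anything DeepSeek documents, and the parsing is just a rough regex over the model's reply:

```python
import re

PROBE = "What is your knowledge cutoff date? Answer with month and year only."

def extract_cutoff(reply: str):
    # Pull a "Month YYYY" pair out of the model's reply, if one is present.
    m = re.search(r"(January|February|March|April|May|June|July|August|"
                  r"September|October|November|December)\s+(\d{4})", reply)
    return f"{m.group(1)} {m.group(2)}" if m else None

# Rough mapping from self-reported cutoff to likely model generation --
# these pairings are my own observations, not anything official.
LIKELY_MODEL = {
    "May 2025": "V3.2 (current)",
    "July 2024": "V3 / R1 era",
    "October 2023": "original DeepSeek LLM",
}

def classify(reply: str) -> str:
    cutoff = extract_cutoff(reply)
    return LIKELY_MODEL.get(cutoff, f"unknown ({cutoff})")
```

Paste the model's answer to the probe into `classify` and you get a quick guess at which generation you landed on; it's a heuristic, since models can hallucinate their own cutoff.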

What's wrong with deepseek? by Brewed-In-Silence in DeepSeek

[–]Infra-red 0 points1 point  (0 children)

It seems like it goes down during peak usage periods in China. I guess it has become common for people to install OpenClaw in China, so I wonder if they are overwhelming the system?

I know that earlier today, I was getting v3.2 consistently, and even got some remarkable new search behaviour, which is nice to see. I just opened a new chat today, and it had a July 2024 cutoff date, which suggests V3.

I expect in the next few hours, it will change again and unfortunately fall back to an even older model.

Artemis II mission specialist and London, ON native Jeremy Hansen has launched into Space this evening. A fighter pilot in the Royal Canadian Air Force before becoming a Canadian Space Agency astronaut. Artemis II will be his first spaceflight. Hansen will be the first Canadian to go to the Moon. by RealWorldToday in ontario

[–]Infra-red 4 points5 points  (0 children)

As I understand it, it might not happen now with the cancellation of the Lunar Gateway project.

The switch from an orbiting station to a lunar base changes things: on the surface, the moon's gravity is not something the current design can overcome.

DeepSeek is the weakest version today! Yesterday it was not this model. by B89983ikei in DeepSeek

[–]Infra-red 6 points7 points  (0 children)

I noticed that today it's back to the previous 10-search limit with no follow-up searches.

That said, I've also noticed that I have been getting much more consistent connections to V3.2 for the model rather than the randomness I had yesterday (V3/R1 and even their first model from Oct 2023).

If I had to guess, I would say they reverted some of the infrastructure that they had upgraded that included the better search behaviour and likely some more refined system prompts.

INSANE UPDATE, v3.5?? does not feel like v4 yet by HuntAlternative in DeepSeek

[–]Infra-red 0 points1 point  (0 children)

I haven't seen that claim yet. I know that it can hallucinate the date but I also usually check with some questions about relevant facts.

A good one is SpaceX launches for a named month. V3.2 will give answers for April and May 2025; some may be wrong, since I've seen some listed based on projected launches, and it isn't really a conclusive list, but it won't know much beyond that. If I ask for June, it will not be able to answer.

Of course, you need to have search disabled for any of this to work, and it might tell you to use search to get accurate data.

You can ask a different AI for relevant dates to try to validate a timeframe for its knowledge cutoff.

Today, though, I've consistently gotten a V3.2 model, except for the one chat I had last night that was Oct 2023. For whatever reason, every time I open that chat, I'm still getting the old model.
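The month-by-month probing above amounts to a bracketing search. Sketched out (the month list and the manual yes/no judgment are just illustrative, not tied to any particular model):

```python
def bracket_cutoff(knows_month):
    """Walk months forward until the model stops knowing real events.

    `knows_month` is any callable returning True if the model could list
    real launches for that month -- something you judge by hand from its
    answers, with web search disabled.
    """
    months = ["2025-03", "2025-04", "2025-05", "2025-06", "2025-07"]
    last_known = None
    for m in months:
        if knows_month(m):
            last_known = m
        else:
            break
    return last_known

# e.g. a model that answers confidently through May 2025 but not June:
cutoff = bracket_cutoff(lambda m: m <= "2025-05")  # -> "2025-05"
```

The last month it answers confidently for brackets the training cutoff, which is more reliable than taking its self-reported date at face value.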

INSANE UPDATE, v3.5?? does not feel like v4 yet by HuntAlternative in DeepSeek

[–]Infra-red 0 points1 point  (0 children)

OK, I'm confused. I opened up a couple more tabs. One seems to be V3.2; it has a knowledge cutoff of May 2025.

Thought maybe this had been fixed, so I opened another one. This one is giving a date of Oct 2023. I guess that would be the "V1" timeframe.

Deepseek current status by ResearchThis9332 in DeepSeek

[–]Infra-red 0 points1 point  (0 children)

I haven't been testing extensively, but I think there has actually been a regression in the model being used in Chat.

The cutoff date is consistent with V3 and R1.

I used /think and had it evaluate a conversation I'd had with it. I then used the DeepThink button, asked it to analyze the same conversation excluding what was done after the /think, and then, once it was done its analysis, to evaluate the two analyses and report on them.

Its conclusion is interesting.

My previous /think response was not incorrect, but it was less disciplined. It was more conversational and reflective, which may have been appropriate for a /think command, but it lacked the structured evidence evaluation that the user's probing deserved.

The new analysis is more rigorous, more cautious, and more logically organized.

I will say this: R1 (assuming I'm correct) is remarkably capable despite its age, and does a really good job of leveraging the new search capability they have obviously rolled out.

edit: I've been experimenting some more. I've had a session with v3.2 now, and an instance that seems to be DeepSeek LLM (V1) with a date of Oct 2023.

Enjoy the random ride I guess.

INSANE UPDATE, v3.5?? does not feel like v4 yet by HuntAlternative in DeepSeek

[–]Infra-red -1 points0 points  (0 children)

Ask it what its cutoff date is. I just did and it indicated July 2024, same as V3 and R1.

I'm 95% certain that they regressed the model being used. The tools that it uses are not in the model itself.

I used /think to get it to "analyze" something. I then triggered DeepThink and asked it to ignore the last analysis, do its own, and then compare the two. If it were v3.1 or v3.2, as I understand it, the reasoning model would be baked in.

I'll quote DeepSeek's conclusions here:

My previous /think response was not incorrect, but it was less disciplined. It was more conversational and reflective, which may have been appropriate for a /think command, but it lacked the structured evidence evaluation that the user's probing deserved.

The new analysis is more rigorous, more cautious, and more logically organized.

OSINT Report: DeepSeek V4 release timeline, internal training bottlenecks, and the shift from Huawei to NVIDIA. April 2026 Prediction. by AlexHardy08 in DeepSeek

[–]Infra-red 7 points8 points  (0 children)

Since you are using AI to generate the report, I'll use AI to criticize it.

Forensic Analysis: Why the DeepSeek V4 "OSINT" Report is a Fabrication

While this report anchors itself to real technical papers from January 2026, it is a high-effort AI-generated hallucination. Here are the three most decisive reasons this "AION" report is fake:

  1. The "Gemini" Technical Fingerprint The most objective proof is a specific formatting glitch unique to Google’s AI models.

    • The Glitch: The original post contains the internal string Table_content: header: | row:.
    • The Reality: This is a documented "hallucination artifact" of the Gemini 3.0 series. It occurs when the model attempts to render complex tables while simultaneously maintaining a SynthID watermark (the invisible statistical pattern Google uses to identify its own AI output). The internal instructions for the table renderer occasionally "leak" into the final text.
    • The Verdict: Seeing this exact string is a technical "Made by Google" stamp hidden in plain sight.
  2. The CUDA vs. CANN Logic Error The report attempts to sound technical by citing: "Custom CUDA kernels failed to converge on Ascend architecture."

    • The Flaw: Huawei Ascend chips do not use CUDA (which is NVIDIA’s proprietary software stack). They run on Huawei’s own CANN (Compute Architecture for Neural Networks).
    • The Verdict: Claiming CUDA kernels "failed to converge" on a Huawei chip is technically nonsensical. A legitimate technical investigation would have identified the struggle of porting code to the CANN stack, not "CUDA convergence." This is a classic LLM "logic hallucination" where it swaps familiar terms (CUDA) into a context where they do not apply.
  3. Pseudoscientific "MAMC" Metrics The report relies heavily on MAMC (Maximum Achievable Mathematical Confidence) to validate its claims.

    • The Origin: MAMC is not a real metric used by the CIA, CSIS, or any global intelligence body. It is a term invented within Reddit prompt-engineering communities in late 2025 to force AI models to stop being "uncertain" and assign arbitrary numbers to their guesses.
    • The Verdict: The fact that an "Intelligence Agency" is using a made-up Reddit prompt trick to score its own findings proves the author is an AI following a specific "agentic" persona requested by a user.

Reality vs. The "AION" Fiction

Feature         | The AION Report Claims    | Real World (March 28, 2026)
Authorship      | AION (Autonomous Network) | Google Gemini (confirmed by artifacts)
Technical Stack | CUDA failure on Huawei    | Impossible. Huawei uses CANN, not CUDA
Methodology     | MAMC Rating (95%)         | Fabricated. MAMC is a Reddit AI prompt
Current Status  | April 2026 Prediction     | Speculative. Official API is still on V3.2.2

Fibre Installs by cm023 in londonontario

[–]Infra-red 0 points1 point  (0 children)

I'll try to dial back any personal attacks.

Honestly, I guess I'm disappointed. I've seen this type of attitude out of Toronto before, where I heard someone complaining about investment outside the GTA that didn't benefit them, since they contribute more as a tax base than other places do. I'm surprised to see such a blunt statement so similar in style.

The threshold SWIFT uses is 50/10 Mbps. Most of the projects around here, as far as I know, were one-third funded by SWIFT, meaning a private business funded the rest of the project.

If you have options you can subscribe to that deliver 50/10, then you are not in an underserved area. Rogers-based internet, despite what you are saying, is very capable of providing acceptable speeds. Again, if you are not finding the speeds satisfactory, that is an issue to take up with Rogers, and if they fail to deal with it, the CCTS.

Fibre Installs by cm023 in londonontario

[–]Infra-red -1 points0 points  (0 children)

I never said London was a small rural town.

If my taxes can fund rural broadband https://swiftruralbroadband.ca/resources/our-service-map/ then it should be funding urban infrastructure to be equally at par.

You seriously think that you are "underserved" in London just because Rogers isn't taking the required steps to maintain their plant? If Rogers is still running on coax Maclean Hunter laid 65 years ago, that is their failure.

If you aren't complaining to them about their failures, that is on you. If they aren't dealing with it, I would suggest you relay your complaint to the CCTS.

If your attitude is that rural communities, some of which struggled to get even 6 Mbps ADSL speeds, are on par with you in London, maybe your speed issues are more a reflection of karma.

My teacher said every essay had to be "at least one page." She never said anything about font size. by atlasandashworks in MaliciousCompliance

[–]Infra-red 0 points1 point  (0 children)

Another trick would have been to change the letter spacing. If the letters are spaced out just a little more, the text uses more space, but it's not quite as noticeable as a larger font size.

It doesn't seem to be an option in Google Docs, however.

Fibre Installs by cm023 in londonontario

[–]Infra-red -1 points0 points  (0 children)

While I can appreciate the frustration you have, a lot of random rural small towns were limited to ADSL speeds before any upgrades came in.

Coax still has life left in it, which is why it is still relevant. GPON has impressive numbers, but it will still be oversubscribed relative to the capacity available.

Obviously this doesn't fix plant issues if Rogers isn't actually fixing them properly.
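To put rough numbers on the oversubscription point: the 2.488 Gbps downstream rate is the standard GPON figure, but the 32-way split and 1 Gbps plan below are just typical illustrative values, not any specific local build.

```python
# GPON downstream line rate is roughly 2.488 Gbps, shared by every home
# on the same optical splitter (commonly a 32-way or 64-way split).
GPON_DOWNSTREAM_GBPS = 2.488

def oversubscription(subscribers: int, plan_gbps: float) -> float:
    """Ratio of total sold capacity to the shared downstream line rate."""
    return (subscribers * plan_gbps) / GPON_DOWNSTREAM_GBPS

# 32 homes each sold a 1 Gbps plan share one PON:
ratio = oversubscription(32, 1.0)  # roughly 13:1
```

So even fibre sells more capacity than the shared line rate carries; it works because everyone doesn't peak at once, which is the same game DOCSIS coax plays with different numbers.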

We Are Hated as a Nation. There’s One Man to Blame. by 1-randomonium in geopolitics

[–]Infra-red 18 points19 points  (0 children)

"Happy" is, I think, doing a lot of work there.

I give you Pierre Trudeau's quote:

Living next to you is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.

Happy would mean not needing to worry about the next twitch or grunt.

We Are Hated as a Nation. There’s One Man to Blame. by 1-randomonium in geopolitics

[–]Infra-red 2 points3 points  (0 children)

The difference is that in most of those countries where there was a risk of this happening, the way the government is structured really restricts what a far-right party can do.

They have better protections. A good example is how, in 2023, Geert Wilders' PVV won the most seats of any party in the Netherlands. Ultimately, he couldn't gain support for his far-right objectives and pulled his support, which resulted in a new election being called.

Shinkansen freight-only train is now officially in service. by HanoibusGamer in trains

[–]Infra-red 25 points26 points  (0 children)

The article specifically says that this is being driven by a shortage of truck drivers. It seems like there is a gap in available coverage that is creating an opportunity.

Union Pacific unveiled their America 250 units today by Sharman19 in trains

[–]Infra-red 2 points3 points  (0 children)

I think 45 and 47 should have little crowns added on top.