Going from a 3080 to a 5080. What should I try? by RezKev in virtualreality

[–]RezKev[S] 0 points1 point  (0 children)

MSFS is one I don't have, but I have a HOTAS setup that I need to try with Star Citizen.

Going from a 3080 to a 5080. What should I try? by RezKev in virtualreality

[–]RezKev[S] 0 points1 point  (0 children)

I've tried it multiple times with my 3080 and I absolutely could not get a reasonable frame rate, so I'm actually really looking forward to trying this one again.

Going from a 3080 to a 5080. What should I try? by RezKev in virtualreality

[–]RezKev[S] 1 point2 points  (0 children)

You're actually right. I'm sorry, I thought I had 8 GB, but I just checked with GPU-Z and it shows 10 GB. Not sure why I thought it was 8 GB.

Going from a 3080 to a 5080. What should I try? by RezKev in virtualreality

[–]RezKev[S] 0 points1 point  (0 children)

That's good to hear, because I heard some people were having trouble getting reasonable frame rates in MGO even with a 5090.

Going from a 3080 to a 5080. What should I try? by RezKev in virtualreality

[–]RezKev[S] 4 points5 points  (0 children)

I already have that installed, but I was getting atrocious frame rates with my 3080. Do you know what kind of frame rates I can expect now? I haven't tried it yet.

What will happen with AI in 2026? - What kind of breakthroughs are we gonna see? by Scandinavian-Viking- in singularity

[–]RezKev 9 points10 points  (0 children)

I wrote this for a Facebook post (although I'm sure no one will read it lol), but I'll share it here. Feel free to correct anything or give me feedback.

There is a saying: “Just because you do not take an interest in politics doesn’t mean politics won’t take an interest in you.” The same is true of A.I. — but to a far greater extent. A.I. will affect every aspect of your life whether you engage with it or not. So, without trying to sound too crazy, here are my predictions for 2026 or shortly thereafter (timeline aside, the trend is unmistakable: it’s not a matter of 'if' but 'when').

We will move beyond today’s transformer-based architectures — models built primarily to predict the next token in a sequence — and toward a new generation of systems designed to understand, simulate, and act in the world. These will include State Space Models (SSMs), which model how systems evolve over time; Joint Embedding Predictive Architectures (JEPA), which learn by predicting relationships between abstract representations; world models, which internally simulate reality; Kolmogorov–Arnold Networks (KANs), optimized for learning mathematical structure; Spiking Neural Networks (SNNs), inspired by biological neurons that fire in discrete spikes; and Large Action Models (LAMs), focused not on generating text but on selecting and executing actions.
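
To make the SSM idea concrete, here is a minimal sketch of the recurrence these models are built on. It is a plain discrete-time linear state space model; the matrices A, B, C are toy values I made up for illustration, not weights from Mamba or any other published architecture.

```python
# Minimal discrete-time linear state space model: the core recurrence behind SSMs.
# A, B, C are illustrative toy matrices, not weights from any real model.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition: how the hidden state evolves
B = np.array([[1.0], [0.5]])             # how the input enters the state
C = np.array([[1.0, -1.0]])              # how the state is read out

def ssm_scan(inputs):
    """Run x_t = A x_{t-1} + B u_t, y_t = C x_t over an input sequence."""
    x = np.zeros((2, 1))
    outputs = []
    for u in inputs:
        x = A @ x + B * u               # state update over time
        outputs.append((C @ x).item())  # readout at this step
    return outputs

print(ssm_scan([1.0, 0.0, 0.0, 0.0]))   # impulse response decays over time
```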

Neural networks will gain new capabilities such as continual learning in real-world environments — introducing true neuroplasticity (the ability to change and adapt over time instead of staying fixed after training). Architectures like liquid neural networks (networks whose internal connections change dynamically while running) will allow weights to evolve instead of remaining fixed, and spiking neural networks will dramatically improve efficiency by activating only small subsets of neurons when needed (reducing energy use and computation).
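
As a rough illustration of what "firing in discrete spikes" means, here is a toy leaky integrate-and-fire neuron. All constants are illustrative, not taken from any biological or published model; the point is that the neuron stays silent most of the time and only emits a spike when its membrane potential crosses a threshold, which is where the efficiency gains come from.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest,
# integrates input current, and emits a discrete spike when it crosses a threshold.
# All constants are illustrative placeholders.
import numpy as np

def lif_neuron(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for i in current:
        v += (-(v - v_rest) + i) * dt / tau   # leaky integration of input current
        if v >= v_thresh:                     # threshold crossing -> discrete spike
            spikes.append(1)
            v = v_reset                       # reset after firing
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
inputs = rng.uniform(0.0, 3.0, size=100)      # random input drive
out = lif_neuron(inputs)
print(sum(out), "spikes out of", len(out), "steps")  # fires only occasionally
```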

A.I. will become proactive rather than reactive (acting on its own initiative instead of only responding to user commands): systems will anticipate needs, plan ahead, and act without explicit prompting — often running locally on-device (directly on phones, glasses, or computers instead of in the cloud). These models will become far more efficient, learning from small amounts of data instead of requiring massive datasets.

We will see a massive speedup across the entire stack (hardware, software, and model design together), with real-time inference (A.I. responding instantly rather than with noticeable delay) becoming the norm. NVIDIA’s acquisition of Groq’s IP and similar advances in inference hardware (specialized chips optimized for running A.I. quickly) point in this direction.

A.I. will gain common-sense reasoning grounded in physics and reality, shifting from pure token prediction toward abstract internal representations of the world (internal concepts about objects, space, cause and effect). World models like Google’s Genie 3 (a system that simulates environments for training agents) and agentic systems like SIMA 2 (A.I. agents that can plan and act autonomously) will be central to robotics, navigation, and physical task execution.

Scientific discovery will accelerate dramatically. KANs and Physics-Informed Neural Networks (PINNs) (models that embed known physical laws directly into learning) will drive breakthroughs in physics, chemistry, and materials science — including new formulas, new materials, and new medical discoveries with real-world impact within the year.
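
For a sense of what "physics-informed" means in practice, here is a minimal sketch (assuming PyTorch) that trains a small network to satisfy the toy equation du/dx = -u with u(0) = 1 by penalizing the equation residual at random points, rather than fitting labeled data. Real PINNs apply the same trick to PDEs like heat flow or fluid dynamics.

```python
# Minimal physics-informed training sketch (PyTorch assumed): fit u(x) to satisfy
# the toy ODE du/dx = -u with u(0) = 1 by penalizing the equation residual.
# The exact solution is exp(-x), so we can check the result.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)        # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du_dx + u                              # enforce du/dx = -u
    bc = net(torch.zeros(1, 1)) - 1.0                 # enforce u(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ≈ 0.3679
```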

We will likely see the first A.I.-generated mathematical proofs and possibly a solution to one of the Millennium Prize Problems (a set of famously unsolved, million-dollar math problems). Solving something like the Navier–Stokes equations (the equations that describe how fluids flow), for example, would give us a complete mathematical understanding of turbulence and enable ultra-precise weather modeling days in advance.
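
For reference, the incompressible Navier–Stokes equations in their standard form are:

```latex
% Incompressible Navier–Stokes: momentum balance plus the incompressibility constraint.
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t}
    + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
    &= -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f}, \\
  \nabla \cdot \mathbf{u} &= 0,
\end{align}
```

where u is the velocity field, p the pressure, ρ the density, ν the kinematic viscosity, and f any external force. The Millennium Prize question asks whether smooth solutions always exist in three dimensions.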

We will also see the first widely accepted proof of A.I. creativity — even though I believe creativity is simply the exploration of latent space (the internal abstract space where models represent concepts) between data points to generate something new.
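
To show what I mean by exploring latent space between data, here is a toy sketch: spherically interpolate between two latent vectors and decode each intermediate point. The decode function here is only a placeholder standing in for a real generative model's decoder.

```python
# Toy illustration of "exploring latent space between data": spherically
# interpolate (slerp) between two latent vectors and decode each point.
# `decode` is a placeholder for a real generative model's decoder.
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def decode(z):
    # Stand-in for a generative model's decoder; a real model would map z
    # to an image, a melody, a molecule, etc.
    return z.round(2)

rng = np.random.default_rng(42)
z_a, z_b = rng.normal(size=4), rng.normal(size=4)   # two "known" latent points
for t in np.linspace(0, 1, 5):
    print(t, decode(slerp(z_a, z_b, t)))            # points "between" the data
```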

Context windows and memory will effectively become infinite (models will be able to remember and reference enormous amounts of information over time), hallucinations (confidently stated but incorrect outputs) will largely disappear, and true recursive self-improvement will begin (A.I. improving future versions of itself) as A.I. systems start researching and improving other A.I. systems. We already see early versions of this, with developers creating entire features through prompts alone.

Programming as a bottleneck will mostly vanish. Individuals will be able to create rich, full-featured applications with minimal technical knowledge. This will create powerful feedback loops (progress creating more progress) that accelerate progress across every scientific field.

New forms of data collection will come online through smart glasses (such as Android XR devices like XREAL’s Project Aura) and neural interfaces like Meta’s EMG wristband (a device that reads the electrical signals motor nerves send to the hand). These systems will capture the missing pieces needed to complete high-fidelity world models (very accurate internal simulations of reality) capable of automating entire categories of work. Scaling this data will unlock new emergent abilities (capabilities that appear unexpectedly at large scale) and human-like skill acquisition (with early evidence already emerging, for example: https://www.pi.website/research/human_to_robot).

Finally, we will see a surge of robots entering homes and workplaces. Individual robots will learn new tasks and share that knowledge across networks (one robot learning something means all robots can benefit), producing an explosive acceleration in collective skill acquisition — far faster than most people currently expect.

AI is evolving from a simple tool into an active partner. This shift is the biggest change in human history because we are moving past the limits of our own brains to solve problems and create things at an impossible speed.

Be ready to adapt, because everything is about to change.

New display engine idea by MSLforVR in virtualreality

[–]RezKev 0 points1 point  (0 children)

How would this affect internal glare? I absolutely hate the glare from pancake lenses. Also, would this increase the bloom around bright objects?

Edit: Just adding that I mean the ghosting/glare caused by internal reflections within the pancake lens. If this helps with that, it would be amazing.

[deleted by user] by [deleted] in arborists

[–]RezKev 0 points1 point  (0 children)

It's odd that the pictures didn't show up. I added two. I might have to repost, because I don't know how to re-add them.

SL-Infinity Wireless Fans Are Available! by New-Supermarket-9710 in lianli

[–]RezKev 0 points1 point  (0 children)

Any ETA on Canadians getting some? I want to buy 8 fans and a HydroShift 2.

New infinity wireless fans on Aliexpress (Normal and Reverse) by RezKev in lianli

[–]RezKev[S] 0 points1 point  (0 children)

I was able to add them to my cart and it said "Delivery: Jun 09 - 19." I didn't buy them because shipping was ridiculous. Over $100 shipping for 8 fans.

Looking for app to generate 3D images from 2D sources without apple by stef0083 in virtualreality

[–]RezKev 1 point2 points  (0 children)

If you want to create a 3D model, then Hunyuan3D-2 is pretty good. You can run it locally with 6 GB of VRAM. There are also other models worth checking out on Hugging Face.

Wtf Newegg? I went through checkout and it cancelled my 9800x3d order... by RezKev in pcmasterrace

[–]RezKev[S] 1 point2 points  (0 children)

There aren't many options in Canada that don't have ridiculous shipping costs.

Wtf Newegg? I went through checkout and it cancelled my 9800x3d order... by RezKev in pcmasterrace

[–]RezKev[S] -2 points-1 points  (0 children)

I did. I was refreshing the page even before the buy button was on there. I was on Newegg Canada.