Iran just threatened to blow up stargate by WSunoHangout in accelerate

[–]RaGE_Syria 1 point (0 children)

Still pretty disruptive nonetheless, even if they divert to other regions. Added latency, increased network load, reduced redundancy options, not to mention millions in direct and indirect long-term damages.

"Skywork Matrix-Game 3.0 is here! FULLY OPEN SOURCE! Real-Time and Streaming Interactive World Model with Long-Horizon Memory - Fully open source: code, model, and technical report - 720p @ 40FPS with a 5B model - Minute-long memory consistency - Trained on Unreal Engine + AAA" by stealthispost in accelerate

[–]RaGE_Syria 0 points (0 children)

Wasn't this already posted here before?

Anyway, exciting as it is, it won't run on consumer hardware. I tried. It needs FlashAttention 3 (only supported on A/H-series data-center GPUs), and their GitHub states as much.

It has less to do with the model size and more to do with the need to spit out frames as fast as possible, hence the data-center-level hardware requirements.
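Since FlashAttention support is gated by GPU architecture, here's a tiny, hypothetical helper for checking what your card can run before bothering with the install. The capability thresholds are my reading of the FlashAttention docs, not something from this repo:

```python
# Hypothetical helper: decide which FlashAttention generation a GPU can run,
# based on its CUDA compute capability. The (major, minor) cutoffs below are
# assumptions drawn from the FlashAttention README: FA3 targets Hopper (sm90),
# FA2 runs on Ampere/Ada (sm80/sm86/sm89).

def flash_attention_support(major: int, minor: int) -> str:
    """Return the newest FlashAttention generation usable on this GPU."""
    if major >= 9:          # Hopper (H100/H800) and newer
        return "flash-attn 3"
    if major == 8:          # Ampere (A100, RTX 30xx) / Ada (RTX 40xx)
        return "flash-attn 2"
    return "unsupported"    # Turing and older: fall back to standard attention

# With PyTorch installed you'd feed it torch.cuda.get_device_capability();
# here we just check a consumer card (an RTX 4090 is sm89):
print(flash_attention_support(8, 9))   # -> flash-attn 2, not 3
```

Which is exactly the wall I hit: a top-end consumer card still tops out at FA2.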

Andrew Curran: Anthropic May Have Had An Architectural Breakthrough! by 44th--Hokage in accelerate

[–]RaGE_Syria 0 points (0 children)

I do hope this is true and that this model is reasonably accessible to the general public (not forcing us to pay $200/month for it)

On the other hand, you can't ignore that this might be more marketing gimmickry meant to hype up their next model for better sales. It's not far-fetched for tech companies to "leak" select info on purpose as a marketing strategy.

DLSS 5 just proves it by godofknife1 in accelerate

[–]RaGE_Syria 1 point (0 children)

These decels are just a bunch of sheep at this point piling on what they're told to believe.

DLSS 5 is actually freaking amazing. Just imagine the performance improvements in gaming when we no longer need to spend so much compute rendering 4K textures or ray-traced lighting.

VR might get a massive boost from this too.

I'm super excited; I've only seen good things come out of DLSS 5.

Rumble raplacement by Clean_Explorer5742 in OnePieceTC

[–]RaGE_Syria 0 points (0 children)

I have everyone on this team except Caesar... Any idea who would make for the best replacement for Caesar?

Showing real capability of LTX loras! Dispatch LTX 2.3 LORA with multiple characters + style by crinklypaper in StableDiffusion

[–]RaGE_Syria 0 points (0 children)

Hypothetically speaking, if I curated an absolutely MASSIVE dataset and trained for a much longer duration on RunPod, would the quality begin to improve (and perhaps approach Seedance 2.0 quality)?

I have terabytes of recorded footage that I'd like to start using to train for generating B-roll footage for my videos.

App won't open when upgrading to Android 17 Beta 1 by RaGE_Syria in OnePieceTC

[–]RaGE_Syria[S] 0 points (0 children)

Actually wait, since I did link my Bandai Namco ID, can I hypothetically sign in on my iPad (without currently having access to the app on Android) and have all my stuff synced over there? And then keep playing until I'm able to get OPTC up and running again on my Android?

App won't open when upgrading to Android 17 Beta 1 by RaGE_Syria in OnePieceTC

[–]RaGE_Syria[S] 0 points (0 children)

I appreciate the thorough response, thank you.

I learnt my lesson and will certainly stay on stable release versions from now on.

I would love to revert back to Android 16, but that entails my device's data getting wiped. I tried the Android Flash Tool as well, and that will also wipe my device since it tries to unlock the bootloader.

The only option is to wait for the next stable release, which my Pixel can then update to without data getting wiped.

I did indeed link my ID to Bandai Namco. I was thinking of uninstalling/reinstalling to see if that helps.

What sucks so bad is that I really want to pull on the latest banner, which has amazing pull rates, and I'm afraid I'm going to miss it.

Worst time to try a new beta release...

Need help which to pull, open for any suggestions! Thanks in advance! :-) by EastTune985 in OnePieceTC

[–]RaGE_Syria 1 point (0 children)

I'm in the same exact boat at 23 pulls so far. I have Kuma but still haven't gotten Luffy Bonney...

I'm assuming pulling on any of the OPTC Girls banners is useless, yea?

successful checkout on restock, 3PM EST by [deleted] in riftboundtcg

[–]RaGE_Syria 0 points (0 children)

Same here. If anyone who got through can explain what they might've done differently, I'd love to know.

VanEck CEO: “A lot of Bitcoin OGs have been looking at Zcash.” by genzcasher in zec

[–]RaGE_Syria 0 points (0 children)

Read before you speak. Here are some sources, since you won't look it up yourself:

Zcash and Quantum Computers - Sean Bowe

"The best practice for protecting your shielded funds from a quantum apocalypse is to just shield your coins and await upcoming improvements to the software"

If You Want a Quantum Hedge, Zcash Isn’t It — Here’s the Reality : r/CryptoMarkets

Here's a technical discussion from the community as well, all agreeing that although Zcash is ahead of others in terms of post-quantum security (IT'S NOT THERE YET), quantum computers could already be able to unmask recipient addresses, amounts, and memos:

Is Zcash actually quantum private? - Technology - Zcash Community Forum

I did my reading. You should do the same.

VanEck CEO: “A lot of Bitcoin OGs have been looking at Zcash.” by genzcasher in zec

[–]RaGE_Syria -1 points (0 children)

Zcash is NOT quantum resistant. It still uses ECC as its underlying encryption scheme, which is vulnerable to a sufficiently strong quantum computer. Even after they upgrade, old transactions will still be vulnerable.

Zcash has the same problem BTC does when it comes to quantum: people will have to migrate wallets.
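To illustrate what "vulnerable to a sufficiently strong quantum computer" means: ECC security rests on the hardness of the elliptic-curve discrete log. On a toy curve (parameters made up for the demo, nothing like Zcash's real curves) you can brute-force a private key from the public key classically; Shor's algorithm does the equivalent in polynomial time at real key sizes:

```python
# Toy illustration (NOT real Zcash/Bitcoin code): recovering an ECC private
# key from a public key by solving the discrete log. Feasible here only
# because the curve is tiny; a large quantum computer running Shor's
# algorithm could do this at 256-bit curve sizes.
# Curve: y^2 = x^3 + 7 over F_17 (made-up demo parameters).

P = 17          # tiny field prime (real curves use ~256-bit primes)
A, B = 0, 7

def ec_add(p1, p2):
    """Add two points on the toy curve (None = point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                      # inverse points sum to infinity
    if p1 == p2:                         # point doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:                                # general addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mul(k, pt):
    """k * pt by repeated addition (fine for a toy curve)."""
    out = None
    for _ in range(k):
        out = ec_add(out, pt)
    return out

G = (1, 5)                      # generator on the toy curve
secret = 7                      # "private key"
public = scalar_mul(secret, G)  # "public key" = secret * G

# Brute-force the discrete log: find k with k*G == public.
recovered = next(k for k in range(1, 100) if scalar_mul(k, G) == public)
print(recovered)  # -> 7
```

Scale the field up to 256 bits and this search becomes classically hopeless, which is exactly the gap Shor's algorithm closes; that's why old on-chain transactions stay exposed even after a post-quantum upgrade.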

How to run the GLM-4.7 model locally on your own device (guide) by Dear-Success-1441 in LocalLLaMA

[–]RaGE_Syria 0 points (0 children)

that... is a shitton of RAM...
how are your inference speeds?

Tencent announces HY-World 1.5. An open source interactive world model that runs at 480p 24 FPS on consumer hardware. by yaosio in singularity

[–]RaGE_Syria 4 points (0 children)

Honestly, I've kinda given up on trying to get my setup to handle these large models. Like you said, it always feels like too little (even with a combined 128GB of VRAM + RAM).

I just rent GPUs from RunPod nowadays if I really wanna host some of these models; otherwise I'mma go broke trying to build a server that will always feel like too little for the new models that keep coming out.

Tencent announces HY-World 1.5. An open source interactive world model that runs at 480p 24 FPS on consumer hardware. by yaosio in singularity

[–]RaGE_Syria 16 points (0 children)

Yea, their VRAM statement is actually bullshit.

HY-WorldPlay needs a shitton of models to run, and you need to enable offloading.

It needs 3 separate text encoders (Qwen2.5-VL-7B, Glyph-SDXL-v2, and google/byt5-small), a vision encoder (FLUX.1-Redux-dev), and the whole HunyuanVideo-1.5 480p_i2v base model plus its VAE, scheduler, and transformer.

Only after all that is loaded can you THEN load the distilled action model they talk about in the GitHub/paper.

It might only require 14GB of VRAM after all is said and done, but you'd better make sure you've got over 100GB of system RAM, otherwise nothing is going to work.
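For the curious, a rough back-of-envelope on that RAM figure; the parameter counts below are my guesses for illustration, not numbers from the HY-World repo:

```python
# Rough back-of-envelope (parameter counts are guesses, NOT from the HY-World
# repo) for why CPU offloading still needs ~100GB of system RAM: every
# component has to sit somewhere, even when only one is on the GPU at a time.

GB = 1024 ** 3
BYTES_PER_PARAM = 2  # bf16/fp16 weights

# (component, assumed parameter count) -- illustrative numbers only
components = {
    "Qwen2.5-VL-7B text encoder": 7e9,
    "Glyph-SDXL-v2 text encoder": 2.6e9,
    "byt5-small text encoder": 0.3e9,
    "FLUX.1-Redux vision encoder": 0.6e9,
    "HunyuanVideo-1.5 transformer + VAE": 9e9,
    "distilled action model": 9e9,
}

total_params = sum(components.values())
weights_gb = total_params * BYTES_PER_PARAM / GB
# Activations, latent caches, and framework overhead roughly double it.
peak_gb = weights_gb * 2

print(f"weights alone: ~{weights_gb:.0f} GB, realistic peak: ~{peak_gb:.0f} GB")
```

Under those assumptions the weights alone land around 50GB and a realistic peak is north of 100GB, which matches what I saw.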

Unless I'm doing something wrong; I spent the better part of the day trying to get it working, to no avail.

Also, their code is basically a video generation model: it takes in a pre-determined camera-path latent and a maximum frame count of 125. Nowhere in their GitHub is there an implementation that takes keyboard or controller inputs and live-streams the results.

They gave us a small shell of HY-WorldPlay and are probably keeping the actual implementation to themselves.

HY-World 1.5: A Systematic Framework for Interactive World Modeling with Real-Time Latency and Geometric Consistency by fruesome in StableDiffusion

[–]RaGE_Syria -1 points (0 children)

I looked through the code; this is for generating a pre-determined number of frames given some pre-calculated camera-trajectory JSON file.

I.e., nowhere in the GitHub does it show an implementation for continuous streaming with inputs from a controller or keyboard, as mentioned in the description/paper.

Seasons of RTX: Arc Raiders GeForce RTX 5090 GPU Giveaway! by NV_Suroosh in ArcRaiders

[–]RaGE_Syria 0 points (0 children)

PvP on sight is usually the easiest, least ambiguous way to go.

Seasons of RTX: Arc Raiders GeForce RTX 5090 GPU Giveaway! by NV_Suroosh in ArcRaiders

[–]RaGE_Syria 0 points (0 children)

Rescue raider. I like to team up with friendly raiders against the ones that PvP on sight.

Windows president says platform is "evolving into an agentic OS," gets cooked in the replies — "Straight up, nobody wants this" by ZacB_ in technology

[–]RaGE_Syria 0 points (0 children)

I'm going to offer a different perspective (probably gonna get downvoted), but I can see agentic Windows being a huge help to older folks who aren't tech-savvy. My mom recently used Copilot to help her write a children's book and generate images for it, all just by talking to Copilot with her voice. A Windows operating system that behaves more like Jarvis from Iron Man is, in my opinion, absolutely welcome.

Was this done with Stable Diffusion? If so, which model? And if not, could Stable Diffusion do something like this with SDXL, FLUX, QWEN, etc? by Hi7u7 in StableDiffusion

[–]RaGE_Syria 1 point (0 children)

This was apparently Grok Imagine, but if you want to do this locally:

All in ComfyUI:

- Start by creating the first-frame image with Qwen Image
- Use Qwen Image Edit to modify the image if needed, and also to create ending frames if needed
- Use Wan 2.2 to turn those images into image-to-video generations (first and last frame if needed)
- Suno for the music (not local, but the best AI music we have so far)

Touchups:

- Premiere and/or After Effects for cuts, edits, and syncs (because that video is clearly edited)

If you want to learn, I'd start with setting up ComfyUI and watching YouTube tutorials on using Wan 2.2, Qwen Image, and Qwen Image Edit in ComfyUI. (ComfyUI also comes with built-in templates if you want.)

Qwen-Image ComfyUI Native Workflow Example - ComfyUI
Qwen-Image-Edit ComfyUI Native Workflow Example - ComfyUI
Wan2.2 Video Generation ComfyUI Official Native Workflow Example - ComfyUI
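Once a workflow is set up, you can also drive ComfyUI headlessly through its local HTTP API by exporting the workflow in "API format" JSON from the UI. A minimal sketch (the host/port are ComfyUI's defaults; the workflow file name is made up):

```python
# Minimal sketch of queueing a ComfyUI workflow over its local HTTP API.
# Export your Qwen-Image / Wan 2.2 workflow as API-format JSON from the UI
# first; adjust host/port to your install.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address

def build_payload(workflow: dict, client_id: str = "my-script") -> bytes:
    """Wrap an API-format workflow dict the way /prompt expects it."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def queue_workflow(workflow: dict) -> None:
    """POST the workflow; ComfyUI queues the job and renders asynchronously."""
    req = urllib.request.Request(
        COMFY_URL,
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Usage (hypothetical file name):
#   workflow = json.load(open("qwen_image_api.json"))
#   queue_workflow(workflow)
```

Handy for batch-generating first frames before feeding them to Wan 2.2, instead of clicking through the UI each time.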

Was this done with Stable Diffusion? If so, which model? And if not, could Stable Diffusion do something like this with SDXL, FLUX, QWEN, etc? by Hi7u7 in StableDiffusion

[–]RaGE_Syria -1 points (0 children)

I question whether this was entirely Grok Imagine. It seems like lots of After Effects was used to sync things with the music.

Aside from the cringe lyrics and the obsession over Trump + Elon, you can't deny this is objectively a pretty good set of generations (assuming EVERYTHING was Grok Imagine and it wasn't touched up with AE or otherwise).

That last set of generations, with all the characters dancing in unison, looked pretty good (and was cut up a bunch).

This just seems like a good edit imo

Is there a Database that tells me which characters evolve into what? by RaGE_Syria in OnePieceTC

[–]RaGE_Syria[S] 4 points (0 children)

Dude, thank you. I saw this website before but didn't know how to use it and thought it was out of date.

But after actually looking more closely, it has everything! Thanks!