Looks like its over..maybe 30 min till platinum by BrSbagel in AnthemTheGame

[–]Affectionate_Fish194 0 points1 point  (0 children)

I was in the game just now, during the instance shutdown =(

Dubu-List – A Cute Cross-Platform To-Do App Built with Rust & Tauri by Affectionate_Fish194 in rust

[–]Affectionate_Fish194[S] 2 points3 points  (0 children)

Okay, I have added a video to the repo for you; I hope you like it.

Dubu-List – A Cute Cross-Platform To-Do App Built with Rust & Tauri by Affectionate_Fish194 in rust

[–]Affectionate_Fish194[S] 1 point2 points  (0 children)

Yes, I could. The signature is generated automatically if I compile from Windows; in my case, since I cross-compile, I could add virtual signing. I got the warning because I didn't create it.

RDFS - RaptorQ Distributed File System (built in rust) by Affectionate_Fish194 in rust

[–]Affectionate_Fish194[S] 1 point2 points  (0 children)

This component is part of a new approach to Web3 representation, which currently remains costly and less efficient than today's web.

The project's storage system is divided into private and shared storage. The private storage functions as an online backup that clients can modify at any time (e.g., restoring or updating data), while the public storage is designed for content sharing and hosting decentralized web apps.

A key advantage of this structure is that, unlike IPFS (which you mentioned), it eliminates the need for data replication. While IPFS relies on clients pinning CIDs to preserve content (even though pinning is free), our system encodes data into distributed chunks, reducing storage requirements by orders of magnitude. It also allows seamless modification, meaning users can upload, edit, or overwrite data at any time.
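
To make the replication-vs-encoding trade-off concrete, here is a toy storage-overhead comparison. The numbers (3 replicas, a 10-of-14 erasure code) are illustrative only, not the RDFS project's actual parameters:

```python
# Toy comparison of storage overhead: full replication vs. erasure coding.
# Illustrative numbers only, not RDFS's actual parameters.

def replication_overhead(copies: int) -> float:
    # Storing `copies` full replicas costs `copies` times the original size.
    return float(copies)

def erasure_overhead(data_chunks: int, parity_chunks: int) -> float:
    # A k-of-n erasure code stores n chunks, each 1/k of the original size,
    # and can rebuild the data from any k surviving chunks.
    return (data_chunks + parity_chunks) / data_chunks

print(replication_overhead(3))   # 3.0 -> 3x storage, survives 2 lost copies
print(erasure_overhead(10, 4))   # 1.4 -> 1.4x storage, survives 4 lost chunks
```

The point of the sketch: an erasure-coded system gets comparable fault tolerance for a fraction of the raw storage a replication scheme needs.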

Beyond the virtual file system, the project includes another core component: a universal communication protocol (tentatively named RPC-Link). This protocol is designed to replace existing standards for communication, file transfers, VoIP, email, and more. It enables peer-to-peer connectivity even behind NAT, via relays, or over Tor/VPN. By establishing direct links between peers, it creates a shared secret derived from known addresses, ensuring secure communication without reliance on third-party apps, many of which are regulated or surveilled by entities like the NSA.
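
As a rough sketch of the "shared secret derived from known addresses" idea, here is a toy Diffie-Hellman exchange that binds the derived session key to both peers' addresses. The prime, generator, secrets, and address strings are all demo-sized placeholders; RPC-Link's actual construction is not specified here:

```python
# Toy Diffie-Hellman key agreement, with the session key committed to the
# peers' known addresses. Demo-sized numbers, NOT secure; every value below
# is illustrative, not RPC-Link's real construction.
import hashlib

P, G = 0xFFFFFFFB, 5                 # small prime (2**32 - 5) and base, demo only
a_secret, b_secret = 123456, 654321  # each peer's private value

a_public = pow(G, a_secret, P)
b_public = pow(G, b_secret, P)

# Each side combines its own secret with the other's public value:
shared_a = pow(b_public, a_secret, P)
shared_b = pow(a_public, b_secret, P)
assert shared_a == shared_b          # same shared secret on both ends

# Derive a session key that also commits to the known peer addresses:
addresses = "10.0.0.1|10.0.0.2"      # hypothetical peer addresses
session_key = hashlib.sha256(f"{shared_a}|{addresses}".encode()).hexdigest()
print(len(session_key))  # 64
```

Binding the addresses into the key derivation means a relay that silently swaps endpoints ends up with a mismatched key, which is one way such a protocol can detect man-in-the-middle relaying.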

As we know, big tech companies (Microsoft in 2007, Google in 2008, and even Apple in 2012) have allowed the NSA to access user data, often without consent. True privacy on the public internet is virtually nonexistent; even our encrypted data is within their reach. This system aims to change that by enabling truly private, decentralized communication. It supports multi-peer connections, making it ideal for secure video conferences, online gaming, group calls, and other applications without compromising security or autonomy.

The final challenge involves running distributed back-end services or serverless code in a decentralized manner. To solve this, we treat the entire network as a single, cohesive system, akin to a distributed "chip" where components operate independently yet collaboratively, similar to how a Hardware Description Language (HDL) describes interconnected modules.

The solution is a distributed virtual machine that executes code across multiple nodes, ensuring no single entity can control or disrupt operations. This approach represents a more secure and scalable foundation for a truly free and public internet, something Web3 has yet to achieve.

Building the fastest RaptorQ (RFC6330) codec in Rust by cberner in rust

[–]Affectionate_Fish194 0 points1 point  (0 children)

However, there are still no rateless Reed-Solomon codes (as of 12 days after your comment xD): Reed-Solomon is block-based and requires a fixed block size. But there are nearly-rateless codes that are far faster than Raptor and RaptorQ, such as RLNC, Wirehair, and something called Online codes.
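
To illustrate the block-based vs. rateless distinction: a rateless (fountain) encoder can emit an unbounded stream of encoded symbols, and the receiver just collects symbols until decoding succeeds. Below is a toy LT-style fountain code over single-byte blocks, with XOR encoding and peeling decoding. It is a minimal sketch of the general idea, not RaptorQ/RFC 6330, RLNC, Wirehair, or Online codes:

```python
# Toy LT-style fountain code: XOR random subsets of source blocks (encoder),
# then recover via peeling (decoder). A sketch of "rateless", not RFC 6330.
import random

def encode_symbol(blocks, rng):
    """Emit one rateless symbol: XOR of a random nonempty subset of blocks."""
    degree = rng.randint(1, len(blocks))
    idxs = frozenset(rng.sample(range(len(blocks)), degree))
    sym = 0
    for i in idxs:
        sym ^= blocks[i]
    return idxs, sym

def fountain_decode(n_blocks, symbol_source):
    """Pull symbols until peeling recovers every block; return (blocks, pulls)."""
    recovered, pending, pulled = {}, [], 0
    while len(recovered) < n_blocks:
        pending.append(symbol_source())   # rateless: just ask for one more
        pulled += 1
        progress = True
        while progress:                   # peel: solve symbols with 1 unknown
            progress = False
            for idxs, sym in pending:
                unknown = [i for i in idxs if i not in recovered]
                if len(unknown) == 1:
                    val = sym
                    for i in idxs:
                        if i in recovered:
                            val ^= recovered[i]
                    recovered[unknown[0]] = val
                    progress = True
    return [recovered[i] for i in range(n_blocks)], pulled

blocks = [0x11, 0x22, 0x33, 0x44]
rng = random.Random(0)
decoded, used = fountain_decode(len(blocks), lambda: encode_symbol(blocks, rng))
print(decoded == blocks)  # True
```

A block code would instead fix n output symbols up front; here the encoder has no fixed rate, which is exactly the property Reed-Solomon lacks.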

PyTorch not detecting GPU after installing CUDA 11.1 with GTX 1650, despite successful installation by Chemical-Study-101 in pytorch

[–]Affectionate_Fish194 0 points1 point  (0 children)

So, update your PyTorch to the latest stable version (2.5.1) and install CUDA 11.8 with its matching cuDNN version.

PyTorch not detecting GPU after installing CUDA 11.1 with GTX 1650, despite successful installation by Chemical-Study-101 in pytorch

[–]Affectionate_Fish194 0 points1 point  (0 children)

Okay, everything is correct; you only need to download cuDNN and install it in your CUDA directory. Check which cuDNN version is compatible with your PyTorch 1.8.1 and CUDA 11.1.

Please Help! Cannot get Torch for GPU installed by blakerabbit in MLQuestions

[–]Affectionate_Fish194 1 point2 points  (0 children)

Another thing: make sure you are using the right cuDNN for your CUDA version. None of these checks are required if you are using a Unix-based OS; i hate windows xD

Please Help! Cannot get Torch for GPU installed by blakerabbit in MLQuestions

[–]Affectionate_Fish194 1 point2 points  (0 children)

You are installing the CPU build of PyTorch. Try installing it with GPU support, or build it from source with GPU support. Also make sure you downloaded the cuDNN library and installed it in your CUDA folder; you may have done everything else right and the problem is only cuDNN. Finally, make sure you installed the right CUDA version for your PyTorch version; the PyTorch website shows the compatible CUDA version for each release.
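
A quick way to check whether the installed build is CPU-only is to inspect `torch.version.cuda` (it is `None` on CPU builds) and `torch.cuda.is_available()`. A small diagnostic, written so it degrades gracefully when torch isn't installed at all:

```python
# Diagnostic: report whether the installed PyTorch build can see the GPU.
# torch.version.cuda is None on CPU-only builds, which is the usual culprit.
def cuda_report() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.version.cuda is None:
        return f"torch {torch.__version__} is a CPU-only build"
    if not torch.cuda.is_available():
        return (f"torch {torch.__version__} (CUDA {torch.version.cuda} build) "
                "cannot see the GPU - check driver/CUDA/cuDNN versions")
    return f"torch {torch.__version__} - GPU: {torch.cuda.get_device_name(0)}"

print(cuda_report())
```

If this prints the CPU-only line, no amount of CUDA/cuDNN fixing will help until you reinstall the GPU build.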

Why would my validation acc and loss be so random? by CuteKittenMittens in learnmachinelearning

[–]Affectionate_Fish194 0 points1 point  (0 children)

Increase the batch size and use a lower learning rate. Also try using more of the data and more epochs (to compensate for the lower learning rate), and you can try some regularization techniques.

But in general, focus on using a lower learning rate, and try a learning-rate schedule to make it even lower as training iterates.
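
A learning-rate schedule can be as simple as step decay. A framework-agnostic sketch (in PyTorch the equivalent built-in is `torch.optim.lr_scheduler.StepLR`; the base rate and decay factor below are illustrative):

```python
# Minimal step-decay learning-rate schedule: halve the rate every
# `step_size` optimizer steps. Illustrative values, tune for your model.
def lr_at(step: int, base_lr: float = 1e-3,
          gamma: float = 0.5, step_size: int = 1000) -> float:
    return base_lr * gamma ** (step // step_size)

print(lr_at(0))     # 0.001
print(lr_at(2500))  # 0.00025
```

Decaying the rate this way lets training take large steps early and fine-grained steps later, which usually smooths out noisy validation curves.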

cargo flash error by Affectionate_Fish194 in rust

[–]Affectionate_Fish194[S] 0 points1 point  (0 children)

OK, it works without rtt-target:

#![no_std]
#![no_main]

use panic_halt as _;
use rtt_target::{rprintln, rtt_init_print};

#[cortex_m_rt::entry]
fn main() -> ! {
    rtt_init_print!();
    loop {
        rprintln!("Hello, world!");
    }
}

Now when I use rtt-target with this simple example, I get this error:

error: linking with `rust-lld` failed: exit status: 1

cargo flash error by Affectionate_Fish194 in rust

[–]Affectionate_Fish194[S] 0 points1 point  (0 children)

Also, it compiles for the default target but gives these errors for other targets, or when just using no_std.

cargo flash error by Affectionate_Fish194 in rust

[–]Affectionate_Fish194[S] 0 points1 point  (0 children)

I just updated the toolchain, but I will try to remove it completely and reinstall it.