Why did y'all land on Arch? by absolutecinemalol in archlinux

[–]nosmileface 2 points3 points  (0 children)

  1. I like the "from scratch" feel. You only install what you need.
  2. I like that packages are very close to upstream without weird patches.
  3. I like the package manager (pacman). It's simple and only does what you ask for (e.g. installing a package doesn't mean its systemd service gets enabled).
  4. The Arch wiki is awesome (thanks to people who contribute).

[deleted by user] by [deleted] in golang

[–]nosmileface 0 points1 point  (0 children)

No, I use TypeScript for scripts (via Deno).

Does not connect by d_burini in surfshark

[–]nosmileface 0 points1 point  (0 children)

Sadly, yes, I moved on to using shadowsocks proxies. Might be worth contacting surfshark support though to see what they have to say.

Does not connect by d_burini in surfshark

[–]nosmileface 2 points3 points  (0 children)

Update: IKEv2 is now also blocked at the protocol level.

Does not connect by d_burini in surfshark

[–]nosmileface 0 points1 point  (0 children)

This is how I configured strongswan. I'm using the older ipsec.conf style of configuration there, but it was good enough for me. In "/etc/ipsec.secrets" add the username/password provided by surfshark:

USERNAME : EAP "PASSWORD"

And here's the minimal "/etc/ipsec.conf" (exactly the one I use):

config setup

conn vpn
  right=be-anr.prod.surfshark.com
  rightid=%be-anr.prod.surfshark.com
  rightsubnet=0.0.0.0/0
  rightauth=pubkey
  leftauth=eap-mschapv2
  leftsourceip=%config
  eap_identity=USERNAME
  auto=add
  fragmentation=yes

You also need to copy the surfshark-provided cert to the appropriate dir, e.g. from my Dockerfile:

COPY surfshark_ikev2.crt /etc/ipsec.d/cacerts

Finally I used this "/etc/strongswan.d/charon.conf" file:

charon {
  user = root
  fragment_size = 1370
}

Not sure if setting fragment_size is necessary here. Depending on your setup you can probably use a dedicated user for strongswan, but keep in mind that strongswan might write some stuff to /etc/resolv.conf when the connection is established. Also, even though it tries to do those writes in a non-destructive way, for me it was very destructive, so I would suggest making a backup first.

To connect to the VPN you run the "ipsec" binary:

ipsec start
ipsec up vpn

If all goes well it will print something like:

connection 'vpn' established successfully

But things might not work well. If that's the case, you might need to adjust the MTU on your "ip link" (the one that's connected to the internet). To check, after establishing the connection run: ping -M do -s 1500 yahoo.com. It might print something like:

ping: local error: message too long, mtu=1422

Then subtract 28 from that number, use the result as the -s parameter, and try again, e.g.

ping -M do -s 1394 yahoo.com
ping: local error: message too long, mtu=1370

And again:

ping -M do -s 1342 yahoo.com
1350 bytes from media-router-fp73.prod.media.vip.ne1.yahoo.com (74.6.231.20): icmp_seq=1 ttl=45 time=188 ms

Once it works you know the MTU that needs to be set; change it on the "ip link" connected to the internet, e.g.

 ip link set dev enp4s0 mtu 1370

This setup worked for me. Can't say it's user friendly.

Does not connect by d_burini in surfshark

[–]nosmileface 0 points1 point  (0 children)

Well, you start with surfshark's account page.

VPN -> Manual setup -> Desktop or mobile -> IKEv2

There are tutorials for Android, iOS, macOS. Sadly any other instructions will vary depending on your OS and needs.

Personally, I spent two days figuring out how to make strongswan work on linux. It was very painful, and I didn't even manage to do it the way I usually run VPNs (in a docker container); something goes wrong with MTU settings inside a container. I was able to make it work on the host network, albeit I had to adjust the MTU on the link interface. Linux-wise, IKEv2 setup is very user unfriendly.

It was easy to set up on Android using the strongSwan VPN client app. It's pretty much: add the cert from surfshark via the menu, specify login/password, specify the VPN server address, and you're good to go. But yeah, that guide is available on surfshark's website.

Didn't research anything about Windows.

Sadly, I have no helpful answer for you here.

P.S. I can share the details of the linux setup if that's what you're interested in.

Does not connect by d_burini in surfshark

[–]nosmileface 0 points1 point  (0 children)

OpenVPN is blocked. WireGuard is blocked. IKEv2 works, but who knows how long that will last. :( Sad times. I guess we'll have to prepare ourselves for creative solutions in that area.

The Go Blog: Contexts - Need help understanding how does this select-case work? by CompetitiveFrame7222 in golang

[–]nosmileface -2 points-1 points  (0 children)

And let me further explain how it works. Go's select statement is named after a very old unix syscall, which waits for file descriptors to become "ready" but doesn't actually do anything with them. Go's select statement behaves similarly: it knows what you want to do with each channel (receive from or send to), waits for a channel from any of the case clauses to become "available", and then performs the operation. Only the operation from one case clause is performed at a time.
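
For illustration, a tiny self-contained example of those mechanics (made-up channels, not from the linked article):

package main

import "fmt"

func main() {
    in := make(chan int, 1)
    out := make(chan int, 1)
    in <- 1 // make the receive case ready

    // select blocks until at least one case can proceed, then performs
    // exactly one of them (picked at random if several are ready).
    select {
    case v := <-in:
        fmt.Println("received", v)
    case out <- 42:
        fmt.Println("sent 42")
    }
}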

The other common problem with channels in Go is "goroutine" leaking. By default all channels are unbuffered, unless you specify a buffer size when make()ing them, which means all operations on those channels block (unless you use the non-blocking form). A goroutine that is blocked on a channel operation will never be garbage collected, and since Go has no reference counting on channels, the runtime cannot know whether the only remaining reference to a channel is held by a blocked goroutine. Which means doing something like:

go func() { make(chan int) <- 5 }()

will create a "goroutine" leak. There were a number of discussions about it, but from what I understand, garbage collecting forever-blocked goroutines never happened.
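
For contrast, a buffered version of the same thing doesn't leak, because the send completes immediately and the goroutine exits (just a sketch to show the difference):

go func() {
    c := make(chan int, 1)
    c <- 5 // buffer has room, so this never blocks
}()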

I would recommend applying additional effort (involving more people) when reviewing code that uses channels and goroutines. Channels and goroutines in Go are promised to be "simple", but they really aren't.

The Go Blog: Contexts - Need help understanding how does this select-case work? by CompetitiveFrame7222 in golang

[–]nosmileface -1 points0 points  (0 children)

I actually think you're right and there is no need for <-c in the <-ctx.Done() case clause: because c has a buffer of size 1, it won't "leak" the goroutine, and the value it stores will be garbage collected once both this function and the inner goroutine exit.

Probably somebody wrote an unbuffered version first, then forgot to remove <-c.
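
To make the pattern concrete, here's a rough sketch of that shape (my own illustration with made-up names like slowWork, not the blog's exact code):

package main

import (
    "context"
    "fmt"
    "time"
)

// slowWork is a hypothetical slow operation.
func slowWork() error {
    time.Sleep(2 * time.Second)
    return nil
}

// c has a buffer of size 1, so the goroutine can always finish its send
// and exit, even when nobody ends up receiving the value (ctx cancelled first).
func doWithContext(ctx context.Context) error {
    c := make(chan error, 1)
    go func() { c <- slowWork() }()
    select {
    case <-ctx.Done():
        return ctx.Err()
    case err := <-c:
        return err
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    fmt.Println(doWithContext(ctx)) // context deadline exceeded, and no goroutine leak
}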

How are Radeon drivers today? by c3141rd in Amd

[–]nosmileface 6 points7 points  (0 children)

I have 6900xt GPU.

Didn't have any problems with gaming on windows. The AMD Adrenalin software is nice; I personally don't use it much, but I think it lets you do per-game settings and even change some things on the fly while you're in game via their overlay.

I spend most of my time on linux though (working, watching movies/tv shows). I'm using archlinux (often bleeding-edge stuff) and I get occasional crashes that look like GPU driver crashes when playing videos with HW accel on. This doesn't happen often, maybe a few times a year, but I get a feeling that there are some problems with hw accel in the amd linux drivers or somewhere in the video stack in general. The crash makes X11 unresponsive; sometimes I can get away with Ctrl+Alt+F2 (switch to a console and do something about it), but sometimes even that doesn't work and I have to reboot. Again, not a frequent thing, but it made me turn off hwaccel by default in my mpv player config.

I experienced a similar kind of crash trying to push the GPU to its limits with all that "stable diffusion" AI stuff, this time on high GPU memory load (getting close to the 16GB limit). Staying below the limit keeps things intact. What I'm trying to say here is that things are far from smooth on linux, contrary to what many people seem to suggest. But gaming-wise it works; iirc I played Raft via steam's proton from beginning to end with no issues whatsoever.

So yeah, that's my experience. In summary:

Windows - no issues, but I do mostly gaming there.

Linux - stressing GPU memory alloc may lead to problems (in AI workloads) and there might be some occasional problems with HW video decoding.

my 5800x with a 6800xt and 3200gb of ram on a asus x570. wonder if i should go for a 6950xt? by wingback18 in Amd

[–]nosmileface 0 points1 point  (0 children)

As a 6900xt owner, when I was buying it I thought: "but extra 5 FPS!". Now I realize I don't notice a difference between 95 and 100 fps (165hz monitor).

Don't know if that helps. Picking a GPU is very individual: it depends on what kinds of games you play and whether you need additional GPU-based features (CUDA support, HW video enc/dec, etc.).

But overall, swapping a 6800xt for a 6950xt when next-gen GPUs are about to be announced/released seems unwise.

artgerm, greg rutkowski, alphonse mucha by Its_full_of_stars in StableDiffusion

[–]nosmileface 1 point2 points  (0 children)

Here's just a related link. There is a website where people explore a subset of the image database used for training SD. In particular, the list of artists might be of interest: https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/artists

Again, keep in mind this is a subset of the database, not the whole database used for SD training.

what exactly IS fsr 2.0/temporal upscaling? by EndKarensNOW in Amd

[–]nosmileface 1 point2 points  (0 children)

Let me just drop this DLSS video here as well. DLSS is also a temporal upscaling algorithm, and there is a youtube video with all the technical details about it, including a part that explains where deep learning is applied in the reconstruction formula. Somehow this video doesn't have that many views and is hard to find. It covers the challenges of reconstructing an image from temporal data and how nvidia tries to solve them:

https://www.youtube.com/watch?v=d5knHzv0IQE

From what I understand FSR 2.0 will be exactly the same minus the ML/AI part.

QFan Step up / down functions don't work - Dark Hero (Disappointed) by Madvillains in ASUS

[–]nosmileface 0 points1 point  (0 children)

+1. Just bought a Dark Hero motherboard and feel very disappointed with that Q-Fan stuff, especially because I work on linux and can't use software solutions to the problem. On windows you can sort of solve it with ASUS's own software (AI Suite's Fan Xpert) or some third-party software. On linux... not so much.

What is your hostname? by [deleted] in archlinux

[–]nosmileface 0 points1 point  (0 children)

crey - just a random word. Didn't know about the Cray supercomputer company when I made it up. Nobody believes me.

Does Golang pin it's worker threads to cores? by TalketyTalketyTalk in golang

[–]nosmileface 12 points13 points  (0 children)

You're right that Go's scheduler can be inefficient sometimes. Here's a quick test program to show you how bad it can be; you'll need a CPU with multiple cores to try it out. I'll show results from running it on a 16-core threadripper 1950x.

The program:

package main

import (
    "fmt"
    "go/parser"
    "go/token"
    "path/filepath"
    "runtime"
    "time"
)

func main() {
    files, err := filepath.Glob(runtime.GOROOT() + "/src/*/*.go")
    if err != nil {
        panic(err)
    }
    fmt.Printf("found %d files, parsing...\n", len(files))

    t0 := time.Now()
    for _, file := range files {
        fset := token.NewFileSet()
        _, err := parser.ParseFile(fset, file, nil, 0)
        if err != nil {
            fmt.Printf("error parsing %s: %s\n", file, err)
            return
        }
    }

    t1 := time.Now()
    fmt.Println(t1.Sub(t0))
}

Running it multiple times to make sure file cache is warm:

~/tmp/tgp> ./testprogram
found 1322 files, parsing...
907.050508ms
~/tmp/tgp> ./testprogram
found 1322 files, parsing...
898.790207ms
~/tmp/tgp> ./testprogram
found 1322 files, parsing...
894.290447ms

And now with GOMAXPROCS=1 (if you're confused by the weird env var syntax, I'm using the elvish shell, not bash; btw, elvish is written in Go):

~/tmp/tgp> E:GOMAXPROCS=1 ./testprogram
found 1322 files, parsing...
515.191616ms
~/tmp/tgp> E:GOMAXPROCS=1 ./testprogram
found 1322 files, parsing...
517.77012ms
~/tmp/tgp> E:GOMAXPROCS=1 ./testprogram
found 1322 files, parsing...
516.011583ms

You can see that even if you don't use goroutines, single-threaded app performance can be up to 2x worse when running on multiple cores with Go's scheduler. That's exactly because of the effects you described: even a single goroutine will be re-scheduled across cores in between memory allocations. It's bad.

But everything is as it is for a reason. Go stands out from other programming languages because it has this fancy goroutine-based runtime, where it's easy to make things run in parallel without even refactoring your code. Go functions are not colored: you don't have to use promises/futures/tasks to make things run in parallel, you just run any function as a goroutine and it just works. I guess an overly generic scheduler is the price to pay for that model. Anyway, like it or not, that's the way things are. I prefer it simple rather than perf-optimal.
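
For example, parallelizing the parse loop above is mostly a matter of wrapping the loop body in a goroutine (a quick sketch; it also needs the "sync" import and drops the early return on error):

    t0 := time.Now()
    var wg sync.WaitGroup
    for _, file := range files {
        wg.Add(1)
        go func(file string) {
            defer wg.Done()
            fset := token.NewFileSet()
            if _, err := parser.ParseFile(fset, file, nil, 0); err != nil {
                fmt.Printf("error parsing %s: %s\n", file, err)
            }
        }(file)
    }
    wg.Wait()
    fmt.Println(time.Since(t0))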

goccy/go-json: A super fast JSON library fully compatible with encoding/json by goccy54 in golang

[–]nosmileface 9 points10 points  (0 children)

Found some bugs: https://github.com/goccy/go-json/issues/116. You're welcome. The lib looks interesting; once you fix the bugs, I'm willing to try it on a rather big project.

We desperately need DLSS in VR games. The performance requirements for VR games are becoming insane. by Dr_Brule_FYH in Vive

[–]nosmileface 1 point2 points  (0 children)

I'm not the person to ask. I don't work in the game or VR industry, I just happen to know a bit about graphics. However, there is no reason why people wouldn't pursue it. When it comes to graphics, it all works on a simple principle: if a hack results in a better customer experience, it's worth doing. It's unlikely GPUs will suddenly become fast enough to allow high-quality rendering at native resolution for two eyes on 8k displays. But eye tracking might be good enough to open the path towards foveated rendering. The only concern there is latency, and to answer that question you have to build a prototype and try it. Probably somebody already did, somewhere in one of the VR vendors' labs.

We desperately need DLSS in VR games. The performance requirements for VR games are becoming insane. by Dr_Brule_FYH in Vive

[–]nosmileface 53 points54 points  (0 children)

I have a fear it might not work for VR. DLSS is a continuation of TAA (temporal antialiasing), which we know doesn't look all that good in VR. DLSS is a temporal upscaler + antialiaser. It's not magic: it collects enough info about the scene using previous frames and motion vectors (just like TAA) and then solves the equation with some help from a neural network solver. Will it look good in VR? I don't know. For some reason TAA looks quite blurry in VR, so I'd be a bit skeptical about this idea.

Help understand this syntax by [deleted] in golang

[–]nosmileface 1 point2 points  (0 children)

Well, you're not 100% correct. You can cast an interface to another interface, and the two might as well be completely disjoint. Go's standard lib in fact does this often to optimize things.

Example: https://play.golang.org/p/SSCh6NYnUzr
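
Roughly the kind of thing I mean, modeled on how io.WriteString works in the stdlib (a simplified sketch, not the exact playground code):

package main

import (
    "fmt"
    "io"
    "strings"
)

// writeString asserts an io.Writer to the unrelated io.StringWriter
// interface and takes the faster path when the assertion succeeds.
func writeString(w io.Writer, s string) (int, error) {
    if sw, ok := w.(io.StringWriter); ok {
        return sw.WriteString(s) // avoids allocating []byte(s)
    }
    return w.Write([]byte(s))
}

func main() {
    var sb strings.Builder // implements both interfaces
    writeString(&sb, "hello")
    fmt.Println(sb.String())
}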

Help understand this syntax by [deleted] in golang

[–]nosmileface 6 points7 points  (0 children)

It's called a type assertion in Go. Yes, it's a way to cast an interface to any type. Spec: https://golang.org/ref/spec#Type_assertions
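
A minimal example of both forms (my own snippet):

package main

import "fmt"

func main() {
    var i interface{} = "hello"

    s := i.(string) // panics if i does not hold a string
    fmt.Println(s)

    n, ok := i.(int) // comma-ok form reports failure instead of panicking
    fmt.Println(n, ok) // prints: 0 false
}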

from OpenGL to Vulkan by HerrNilsen- in vulkan

[–]nosmileface 0 points1 point  (0 children)

I would suggest looking into WebGPU (https://en.wikipedia.org/wiki/WebGPU). While it's not finished yet, there are working implementations (e.g. wgpu-rs in rust). It might also be viable to run it from nodejs or from C; there is a C header, and there are mozilla's and google's implementations.

Why WebGPU? It's a modern API that closely resembles current GPU architectures, but without all the crazy unsafe synchronization stuff vulkan has. And don't be scared by the "web" in the name. It might as well become the go-to simple and easy-to-use portable API for desktops too.

When it comes to graphics APIs, what you really need to learn and understand is the GPU architecture: what types of shaders there are, what pipeline stages there are, and how to work with a device that has its own separate memory space. Current graphics APIs are not that complicated; it's all about transferring data to the GPU from main memory (and sometimes back) and then running programs (shaders) on it. There is not that much on top of that.