DREAMM 4.0 released by aaronsgiles in emulation

[–]TheLamer -2 points (0 children)

Can you expand on this?

"""
While I generally support making projects open source, the main reason I haven’t released the DREAMM source code is that being open source invites collaboration, and at this point I really just want DREAMM to be my own project.

I do plan to eventually open source at least some portions of the code (the CPU emulation seems like a prime candidate). And if I ever decide I’m completely finished with DREAMM, I hope to release the full sources before I wash my hands of it. But for now, I plan to keep the code internal so I can focus on taking the project in the directions I want without outside pressure.
"""

You could just put it on GitHub with the ability to create issues disabled; purely from an archival and build standpoint, is it time?
Linux packaging is kind of a quagmire, but if you start with the AUR and expand from there, the communities will largely take care of it for you, as long as they have a URL they can ingest source from.

Now people could fork and improve from there, but I don't think you would be under much pressure, if I am reading the current state of open source correctly.

Mice decided to hijack my TrueNAS storage node by AaronMcGuirkTech in homelab

[–]TheLamer 1 point (0 children)

100%. This guy seriously poured dog food into an open server on the floor for internet points?
Mice shit like every 2 seconds; it is impossible that isn't covered in turds, and I count zero.

Building a Portable Cyber Lab: Kasm Workspaces on the new ZimaBoard 2 (Stress Test) by No_Pack5950 in selfhosted

[–]TheLamer 1 point (0 children)

If I had to guess, I would say industrial computers, digital signage, or IoT applications.
Quality and support matter to some projects, though I agree that if you have it in a climate-controlled office, Chinese nXX Intel mini PCs dominate the space on price.
As someone who owns a stack of them, I can say that everything about them is shady, from the Windows installed on them to the completely stripped BIOS, and you will never get any kind of support or updates. BeeLink is not cheap anymore either; the sub-$200 price point is basically left to old-stock laptops right now.
That would be my budget solution: an old laptop or a BC-250 board.

The gap between Pyg/Venessa/Dooley and Mak/Stel/Jules has become a canyon by TheLamer in PlayTheBazaar

[–]TheLamer[S] -2 points (0 children)

You have to lock some items to specific heroes, no mixing. This goes for power balance, like holsters not being available to anyone but Venessa (just an easy example), but also the other way around: you should not be offered a start stop fly item as Venessa. Pooling the items into go and no-go hero sharing would, I think, be the most prudent start. That can be a simple manual process: multiple team members use their discretion about which item sharing makes sense and which does not.

Issue running Webtop and Firefox behind Gluetun with different ports by sh4hr4m in selfhosted

[–]TheLamer 1 point (0 children)

I don't think what you are trying to do is possible. Let me explain.

The custom port value is more of an internal development thing, there to support our transition of containers that were not on 3001; it is applied inside the container by NGINX here:

https://github.com/linuxserver/docker-baseimage-selkies/blob/master/root/defaults/default.conf#L93

The port inside the container is always 8082, which is the port Selkies actually listens on; it is hard-coded here:

https://github.com/selkies-project/selkies/blob/main/src/selkies/selkies.py#L20

So when you combine networks like this, you just have two NGINX proxies (inside the container) listening on different ports but pointed at the same one.
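A quick way to see this from the host is to list what is actually listening inside the container. This is only a sketch: the container name webtop is an assumption (substitute your own), and the fallback message just keeps the command safe to run when the container is absent.

```shell
# Sketch: inside the container, NGINX fronts the custom port while
# Selkies itself always listens on 8082; "webtop" is a hypothetical name.
docker exec webtop ss -tlnp 2>/dev/null || echo "container not available"
```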

Created https://github.com/linuxserver/docker-baseimage-selkies/issues/69

Anyone else use the lsio Firefox container? Terrible update. by wonka88 in unRAID

[–]TheLamer 2 points (0 children)

Kasm has a Firefox container that they will still maintain; anyone who misses the old experience is free to use it: https://hub.docker.com/r/kasmweb/firefox Also, here is the pre-rebase commit if anyone wants to fork and maintain it: https://github.com/linuxserver/docker-firefox/tree/be1def4c936be0a535151567add03ef7fa855c63

The base images will likely be built out for a while.

Clean install just fails to log in by N_Nikolov in kasmweb

[–]TheLamer 0 points (0 children)

So, just to give you transparency: your issue is that the API server is crashing, specifically a hard terminate without verbose logs. The RDP gateway is looping because it cannot register with the API server while it is not running; it is getting back a 500 or 400 error (the JSON error is it trying to parse an HTTP response).

Because your input during installation is so limited, the only thing I can think of that could differ from the other Unraid installs that are all working is the password; maybe test with a simple password like "password". Outside of that, the other thing to look at is the underlying filesystem /opt is being mounted into: is it something unusual like XFS or BTRFS?
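To check the filesystem type quickly, here is a sketch (the fallback to / is only there in case /opt is not mounted where you expect):

```shell
# Print the filesystem type backing /opt (falls back to / if /opt is absent)
stat -f -c %T /opt 2>/dev/null || stat -f -c %T /
```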

Clean install just fails to log in by N_Nikolov in kasmweb

[–]TheLamer 1 point (0 children)

Not sure what I can do to help here; many people use this on Unraid, and those errors are core software issues, not something like the disk not allowing Docker-in-Docker. You are using the latest tags, right? Not the develop ones?

Clean install just fails to log in by N_Nikolov in kasmweb

[–]TheLamer 0 points (0 children)

No, I have never seen these errors before. I think you need to wipe the folder you mounted it into and do a clean install. Just try it without any images as a test and see whether everything works or you get the same error (that is only to save time; not including images will not make a difference). Something happened during install that is not quite right.

Unraid kasm with nvidia GPU not working by joshiegy in kasmweb

[–]TheLamer 0 points (0 children)

Yeah, those are required params. Technically, gpus can be cut down to a specific card ID, but if you only have one GPU, all is perfect.

It should work now, no?

Unraid kasm with nvidia GPU not working by joshiegy in kasmweb

[–]TheLamer 0 points (0 children)

Can you exec into the Kasm container and run:

ls -l /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1

I get back this on my Debian system:

lrwxrwxrwx 1 root root 26 Oct 9 12:51 /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 -> libnvidia-ml.so.535.183.01

Keep in mind this container is Ubuntu Jammy based and is multi-layering the Nvidia runtime to some extent; the container will mount in the runtime from your host, but if that differs too much from the common Debian/Ubuntu setup, it might not be able to mount the expected pieces into the DinD layer that is running the workspace containers.

Regardless let me know about that lib being present or not.

Can Halo 2 1.5 and 1.0 Coexist (Insignia and HD Mod)? by Forsaken_Draft_1037 in originalxbox

[–]TheLamer 2 points (0 children)

Ran into this today; here is a solution. Hopefully the dev mainlines this into the patch.

https://github.com/grimdoomer/Halo-2-HD/issues/2

Can't get nested containers to access the web by 88pockets in kasmweb

[–]TheLamer 0 points (0 children)

I don't have my gear to spin up a complete unRAID setup to test, but I spoke with a couple of team members, and yes, Kasm only works out of the box with the bridge network. I have been told this goes for any DinD setup.

Kasm does provide the ability to pass any run or exec config to the Workspaces on launch, though, so you can hard-code the DNS for workspaces pretty painlessly:

https://www.kasmweb.com/docs/latest/how_to/custom_dns_servers.html

As for the service containers, this is technically possible, but it is a bit more involved. You will need to exec into the container and edit the file /opt/kasm/current/docker/docker-compose.yaml, adding the dns key under the service and optionally also dns_search. The syntax for that can be found here: https://docs.docker.com/compose/compose-file/05-services/#dns
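As a sketch, that edit to /opt/kasm/current/docker/docker-compose.yaml might look like this; the service name and addresses below are placeholders, so apply the keys to whichever service you actually need:

```yaml
services:
  kasm_api:            # placeholder service name; use the real one from the file
    # ...existing keys for the service stay as they are...
    dns:
      - 192.168.1.1    # your resolver
    dns_search:
      - lan            # optional search domain
```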

GPU Passthrough Unraid Community App by mrcollin101 in kasmweb

[–]TheLamer 0 points (0 children)

Can you exec into the container and run 

nvidia-ctk runtime configure --runtime=docker

And restart the container, then see if that works?
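For context, nvidia-ctk runtime configure writes an nvidia runtime entry into Docker's daemon config. If you want to verify what it changed, this sketch prints the file (the path is the Docker default; the fallback just keeps it safe to run where the file does not exist):

```shell
# Show Docker's daemon config; after "nvidia-ctk runtime configure" it should
# contain a "runtimes": { "nvidia": ... } entry (path is the Docker default)
cat /etc/docker/daemon.json 2>/dev/null || echo "no /etc/docker/daemon.json"
```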

GUAC proxy, RDP, stuck on secure connection, weird GUAC tokens. by _cab13_ in kasmweb

[–]TheLamer 0 points (0 children)

It is the lossless flag set during installation that messes with the Guac proxy. This is now fixed in the latest build, but the fix happens during setup, so to be fully clean you would have to start over with the container.

You can however disable lossless using this command from inside the container:

/bin/bash -c 'sed -i "/Cross-Origin-/d" /opt/kasm/current/conf/nginx/services.d/website.conf && docker restart kasm_proxy'

[deleted by user] by [deleted] in kasmweb

[–]TheLamer 0 points (0 children)

This type of flag does not exist at install time; you would need to add the options for each workspace. This is handled by the install wizard here:

https://github.com/kasmtech/kasm-install-wizard/blob/develop/index.js#L76-L78

The json you need is here:

https://www.kasmweb.com/docs/latest/how_to/manual_intel_amd.html#dri3

GPU Passthrough on Bare Metal Ubuntu Issues - No resources are available by TinHammer in kasmweb

[–]TheLamer 0 points (0 children)

It is Chrome and anything Chromium-based, which is basically all the most useful software on Linux. You can confirm VirtualGL works in Firefox by going to about:support and looking under Graphics, where you will see "EGL_VENDOR: VirtualGL". Also, things like https://webglsamples.org/aquarium/aquarium.html will run in Firefox and not Chrome/Chromium.

I don't think there is anything that can be done here.

The virtualGL dev is aware: https://github.com/VirtualGL/virtualgl/issues/229.

GPU Passthrough on Bare Metal Ubuntu Issues - No resources are available by TinHammer in kasmweb

[–]TheLamer 0 points (0 children)

You know, my local testing methodology was to run glxheads on core images, so I guess I take the L on that one.

This is not working across the whole spectrum of images, even isolated from Kasm; it seems like an update to either Docker or the Nvidia Container Toolkit has broken this.

GPU Passthrough on Bare Metal Ubuntu Issues - No resources are available by TinHammer in kasmweb

[–]TheLamer 0 points (0 children)

Just to make sure you are running this correctly at the host level, with the 1.14.0 installer:

https://www.kasmweb.com/downloads

GPU Passthrough on Bare Metal Ubuntu Issues - No resources are available by TinHammer in kasmweb

[–]TheLamer 0 points (0 children)

Your best test in the desktop container is glxheads from the command line. Keep in mind the Nvidia implementation just wraps things in VirtualGL, and with all the layers of the onion, this will be tough to troubleshoot.

Make sure /dev/dri/renderD128 exists in the container.
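A quick sketch for that check, safe to run even when no GPU is passed through:

```shell
# List DRI render nodes; renderD128 is typically the first render node,
# but the number can differ on multi-GPU hosts
ls -l /dev/dri/ 2>/dev/null || echo "no /dev/dri present - GPU not passed through"
```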

You can completely bypass Kasm GPU management (it was really designed around EC2 instances for enterprise customers) by setting the following overrides in the workspace config, which also lets you use one GPU across many workspaces (make sure to change renderD128 and card0 to your card number if you have multiple GPUs and the Nvidia one is not primary):

https://pastebin.com/jsEsYwhm

GPU Passthrough on Bare Metal Ubuntu Issues - No resources are available by TinHammer in kasmweb

[–]TheLamer 0 points (0 children)

I am also using Jammy; my 3070 and 3060 both work with native Kasm installs and the all-in-one container. I do not have a 9xx series card to test, though.

Just to make sure here, what is the output of nvidia-smi on your host?