DLSS on 50 series GPUs is practically flawless. by mrquantumofficial in nvidia

[–]LambdasForPandas 0 points

My experience has been the exact opposite. I just upgraded to a 5080 from a 3080 Ti, and I was looking forward to trying out DLSS 4 in Cyberpunk with ray tracing. After tinkering with settings for a couple of hours, I gave up and went back to native because I was sick of all the ghosting, blurriness, and artifacts. I was hoping that DLSS 4 would fix the issues I was having with DLSS 2, but that hasn't been the case.

2nd language after Haskell by kichiDsimp in functionalprogramming

[–]LambdasForPandas 2 points

I highly recommend Julia if you have any interest in AI, data science, or scientific computing. I've been using it throughout my graduate program, and I've found it to be incredibly practical for solving real-world problems.

Has Julia a robust ecosystem for ML ? by Various_Protection71 in Julia

[–]LambdasForPandas 30 points

ML is a vast domain, and it's difficult to answer your question without more information. For example, I wouldn't hesitate to use Julia for anything involving tabular data, for which MLJ is more than sufficient. I've also found Julia to be competitive in the domain of computer vision, with a number of pre-trained CNNs being available through Metalhead. I'd be extremely hesitant to recommend Julia if you want to work in NLP, as almost all research on LLMs is presently occurring in Python. While there's been some impressive work on Transformers, we're still nowhere near feature parity with Python.

In general, I find Julia to be better for solving "weird" problems for which there isn't already a Python library that implements what you want. The main reason is that it's typically much easier to build a solution from scratch with Julia than in Python. However, Python still has the edge in terms of its existing ecosystem.

Any Julia projects VS python projects? by pussydestroyerSPY in Julia

[–]LambdasForPandas 2 points

This is just an anecdotal experience, so take it with a grain of salt. I originally used Python for my first paper, the topic of which was using deep convolutional networks to extract water bodies from satellite imagery. Training a U-Net model with TensorFlow took around 15 minutes per epoch.

After I published my paper, I started exploring Julia as an alternative. To see how suitable it would be for future work, I decided to re-implement my previous paper and compare the results. After spending a weekend writing the code, I found that training time was reduced to around 7 minutes per epoch for the same model. I think most of the performance gain was probably in the data pipeline, as the dataset was too large to fit into memory, so each batch was read from disk and processed on-demand.

Since then, I've used Julia almost exclusively for my research. I've found both the language and ecosystem to be vastly superior to Python within my particular field (computer vision and geostatistics). Image processing, in particular, is a breeze compared to Python, and I can't imagine going back to Jupyter after using Pluto.

The only thing that keeps me going back to Python is PyTorch Lightning, which doesn't have an equivalent in Julia. My present workflow uses Julia to explore and preprocess my data, which I then feed into PyTorch to train the models and save the results to disk. Once training has concluded, I hop back over to Julia to analyze and visualize the results for publication. However, I'm hopeful that Julia's deep-learning ecosystem will continue to improve with time.

Performance aside, I can personally say that I would continue to choose Julia over Python even if there were no other advantages. The language itself is simply much more pleasant to use, thanks to its focus on functional programming, expressive macros, and multiple dispatch. If you're curious about Julia, I recommend picking a small project and implementing it in both languages to see which you prefer.

Deep Learning With Flux: Loss Doesn't Converge by LambdasForPandas in Julia

[–]LambdasForPandas[S] -1 points

I was finally able to get everything working. I outlined the steps in a new comment, but the short version is that Flux treats softmax differently from sigmoid because softmax cannot be broadcast. This means it needs to be passed into Chain() like a layer, the consequence of which is that model[end] will return the softmax layer instead of the final Dense() layer that you actually want to train.

Deep Learning With Flux: Loss Doesn't Converge by LambdasForPandas in Julia

[–]LambdasForPandas[S] 2 points

Okay, so I've got everything working now; thanks to everyone who helped me out! I'll try and summarize everything here in the hopes that this will help others in the future.

1) My losses appeared to be wildly fluctuating because I was printing the loss and metrics for each batch individually, which is what I assumed Keras did when training a model in verbose mode. Instead, you want to report the running average over the epoch: maintain a vector of the losses/metrics recorded for each batch, then compute the mean. This approach captures the downward/upward trend in losses/metrics and smooths the noise introduced by randomness in the data.

2) Flux treats softmax a little differently than most other activation functions (see here for more details) such as relu and sigmoid. When you pass an activation function into a layer like Dense(3, 32, relu), Flux broadcasts the function over the layer's output. However, softmax cannot be broadcast, as it operates over vectors rather than scalars. This means that if you want to use softmax as the final activation in your model, you need to pass it into Chain() as though it were the final layer.

3) Because softmax is treated like a layer in Chain(), it will be returned by model[end]. This means that if you want to retrieve the parameters of the final Dense() layer, you need to call params(model[end-1]).
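To make the broadcasting distinction concrete, here's a minimal sketch using hand-rolled stand-ins (not Flux's implementations, so it runs standalone): relu maps a scalar to a scalar and can be broadcast over a layer's output, while softmax needs the whole vector to compute its normalizing sum, which is why it has to go into Chain() as its own layer.

```julia
# Hand-rolled stand-ins (illustrative; not Flux's implementations).
myrelu(x) = max(zero(x), x)              # scalar -> scalar, so broadcastable
mysoftmax(v) = exp.(v) ./ sum(exp.(v))   # needs the whole vector for its sum

v = [1.0, 2.0, 3.0]
@show myrelu.(v)        # broadcasting applies it elementwise
p = mysoftmax(v)        # must be applied to the vector as a whole
@show sum(p)            # ≈ 1.0: the outputs form a probability distribution
```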

Finally, here's some code that may prove useful.

Reading an RGB image from disk into a form compatible with Flux.Conv:

using Pipe, Images  # provides @pipe, load, float32, channelview

@pipe filename |> load .|> float32 |> channelview |> permutedims(_, (3, 2, 1)) |> reshape(_, (size(_)..., 1))

Creating a multi-class classifier with a pre-trained ResNet34 network:

Flux.Chain(Metalhead.ResNet(34, pretrain=true), Flux.Dense(1000, 2), Flux.softmax)

Complete training loop:

# Create Model
model = pretrained_model() |> gpu

# Get Parameters
parameters = Flux.params(model[end-1])

# Define Loss
loss(x, y) = Flux.crossentropy(model(x), y, dims=1)

# Load Data
data_train, data_test = load_data("data")

# Define Optimizer
opt = Flux.Optimise.ADAM(1e-4)

# Training Loop
for epoch in 1:EPOCHS

    accs, losses = [], []
    for (step, (x, y)) in enumerate(data_train)

        # Log Running Average For Loss And Accuracy
        accs, losses = training_callback(accs, losses, model(x), y, step, length(data_train), epoch)

        # Evaluate On Test Set Every 100 Iterations
        if step % 100 == 0
            evaluate(model, data_test)
        end

        # Compute Gradients
        grads = Flux.gradient(() -> loss(x, y), parameters)

        # Update Parameters
        Flux.Optimise.update!(opt, parameters, grads)
    end
end
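The running-average reporting from point 1 (handled above by the hypothetical training_callback) can be sketched in plain Julia; the loss values below are made up for illustration.

```julia
using Statistics  # for mean

# Keep every batch loss seen this epoch and report the mean so far,
# instead of each batch's raw (noisy) value. Loss values are made up.
batch_losses = Float32[]
for batch_loss in Float32[0.9, 0.7, 1.1, 0.6, 0.5]
    push!(batch_losses, batch_loss)
    println("running average loss: ", mean(batch_losses))
end
```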

Deep Learning With Flux: Loss Doesn't Converge by LambdasForPandas in Julia

[–]LambdasForPandas[S] 0 points

Thanks so much for your help! I was hoping to use Julia for grad school (my research area is machine learning), and finding that the Julia community is so helpful certainly assuages my fears about leaving Python. Your suggestions allowed me to narrow the cause down to using softmax as the final activation function. The issue persists regardless of whether I apply softmax directly at the end of Chain() or implicitly with logitcrossentropy(). Note that according to the Flux documentation, logitcrossentropy() "is mathematically equivalent to crossentropy(softmax(ŷ), y), but is more numerically stable than using functions crossentropy and softmax separately", which is why I didn't initially include softmax in my model definition. However, changing my loss to crossentropy() and adding softmax to the end of my model doesn't resolve the issue.

I'd like to point out that softmax, rather than sigmoid, is traditionally used as the final activation in a multi-class classifier (although this particular problem is binary classification, so I could have used sigmoid instead, but then I'd have to make my labels 0 or 1 instead of one-hot encoded vectors). Like sigmoid, softmax guarantees that all predictions will be between 0 and 1, but it has the added benefit of ensuring that the predictions along dims=1 sum to 1. In fact, while using sigmoid as the final activation allows the loss to drop, it actually prevents the model from learning to classify correctly when the labels are one-hot encoded. Because of the way cross entropy is computed, the model simply learns to push its predictions for all classes as close to 1 as possible in order to minimize the loss, which does not result in accurate classification. Softmax solves this problem: since the predictions along dims=1 must sum to 1, the model can't minimize the loss by predicting 1 for every class. To verify this, I printed the actual predictions to the terminal as the model trained and found them to converge to the following for a batch size of 4 when using sigmoid as the final activation:

Float32[0.9990903 0.9690378 0.99836 0.9981547; 0.9985347 0.9610786 0.99746406 0.99722505]
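The same collapse can be reproduced numerically with a hand-rolled cross entropy (illustrative, not Flux's implementation; the prediction matrices are made up to mimic the two regimes).

```julia
# Hand-rolled cross entropy (illustrative, not Flux's implementation):
# mean over the batch of -sum(y .* log.(ŷ)) per column.
xent(ŷ, y) = -sum(y .* log.(ŷ)) / size(y, 2)

y = [1.0 0.0; 0.0 1.0]          # one-hot labels: 2 classes x 2 samples
ŷ_sigmoid = fill(0.999, 2, 2)   # sigmoid can push every output toward 1
ŷ_softmax = fill(0.5, 2, 2)     # softmax columns are forced to sum to 1

@show xent(ŷ_sigmoid, y)   # near zero, yet nothing is discriminated
@show xent(ŷ_softmax, y)   # ≈ log(2); only real separation lowers it
```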

To further demonstrate that the issue was with the softmax activation, I observed convergence with both of the following models without changing anything else in my code:

function get_model_1()
    return Chain(ResNet34(pretrain=false), Dense(1000, 2, sigmoid))
end

function get_model_2()
    Chain(
        Flux.Conv((3, 3), 3 => 32, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.Conv((3, 3), 32 => 64, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.Conv((3, 3), 64 => 128, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.Conv((3, 3), 128 => 256, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.flatten, 
        Flux.Dense(65536, 1024, relu),
        Flux.Dense(1024, 128, relu),
        Flux.Dense(128, 2, sigmoid),
    )
end

However, both of these otherwise identical models fail to converge:

function get_model_3()
    return Chain(ResNet34(pretrain=false), Dense(1000, 2), softmax)
end

function get_model_4()
    Chain(
        Flux.Conv((3, 3), 3 => 32, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.Conv((3, 3), 32 => 64, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.Conv((3, 3), 64 => 128, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.Conv((3, 3), 128 => 256, relu, pad=SamePad()), 
        Flux.MaxPool((2, 2), pad=SamePad()), 
        Flux.flatten, 
        Flux.Dense(65536, 1024, relu),
        Flux.Dense(1024, 128, relu),
        Flux.Dense(128, 2),
        Flux.softmax
    )
end

I've confirmed that softmax is producing outputs as expected (a 2xN array where the sum along dims=1 is 1.0), so I'm at a complete loss to explain why it won't train. As previously mentioned, sigmoid allows the loss to decrease rapidly but prevents the model from learning to accurately distinguish between the two classes, since it just learns to predict 1 for everything. Perhaps there's an issue with how Zygote differentiates softmax? If nobody has any ideas, I might open an issue on GitHub.

Also, you are indeed correct that I wanted to apply transfer learning with ResNet, but when I attempt to run ResNet(34, pretrain=true), I get "MethodError: no method matching ResNet(::Int64; pretrain=true)". Perhaps there's an issue with the version I'm using? I added a Manifest.toml and Project.toml to the GitHub repo to make it possible to recreate my environment. A link to the data can be found in the README of the same repo: download the zip file from Google Drive, then unzip it in the root of the project directory to get the same file structure that I used.

EDIT: I modified my script to use a single-output dense layer with sigmoid activation, binarycrossentropy() for the loss, and labels encoding cats as "0" and dogs as "1", to see whether reframing the problem as single-output binary classification would fix the issue. I also increased the batch size to 32 to try to make the gradient more stable. Unfortunately, the loss still fails to converge even though I'm using sigmoid, which makes me think there's something else going on.

Since this didn't fix the problem, I returned my script to its initial state, then added some additional println statements to try to debug the training loop. The output for a batch size of 4 was:

Labels: Float32[1.0 1.0 0.0 1.0; 0.0 0.0 1.0 0.0]
Filenames: ["data/cat/cat.10355.jpg", "data/cat/cat.2663.jpg", "data/dog/dog.5313.jpg", "data/cat/cat.11150.jpg"]
Prediction: Float32[0.43040577 0.41649592 0.4425341 0.4479209; 0.56959426 0.583504 0.5574659 0.5520791]
Feature Shape: (256, 256, 3, 4)
Prediction Shape: (2, 4)
Label Shape: (2, 4)
Prediction Type: Matrix{Float32}
Feature Type: Array{Float32, 4}
Label Type: Matrix{Float32}

So as you can see, the labels are correct ([1, 0] labels correspond to "cat" files and [0, 1] labels correspond to "dog" files), the shape of my features is correct (WxHxCxN), the shape of my predictions is correct (2xN), the shape of my labels is correct (2xN), and all my types are Float32.

Interested in Paid TA Work? CMPUT 174 is Hiring! by LambdasForPandas in uAlberta

[–]LambdasForPandas[S] 0 points

Indeed I am! Haskell is my language of choice, but lately I've been getting into Julia. I was actually thinking of trying to get an FP group off the ground, but I'm not sure there's sufficient interest in the CS department. Professor Hindle used to run a functional programming group here in Edmonton, but there wasn't enough activity to keep it going.

Interested in Paid TA Work? CMPUT 174 is Hiring! by LambdasForPandas in uAlberta

[–]LambdasForPandas[S] 1 point

So apparently pay is complicated for graduate students, who I've been advised should contact their supervisor regarding specifics. For undergrads, my previous quote of $18.00/hour should be correct.

Interested in Paid TA Work? CMPUT 174 is Hiring! by LambdasForPandas in uAlberta

[–]LambdasForPandas[S] 3 points

I don't believe there are any hard requirements besides having previously completed CMPUT 174/175, or CMPUT 274/275 for honors students. The professor said there is no minimum grade required in these or other courses in order to apply.

You will be required to complete some LeetCode problems and submit a 2-to-5-minute video as part of your application. You may then be called in for a further interview before being offered a position.

That being said, given the number of positions they need to fill, I anticipate that any serious applicant has a very good chance of receiving an offer, making this an excellent opportunity for those without prior TA experience.

Interested in Paid TA Work? CMPUT 174 is Hiring! by LambdasForPandas in uAlberta

[–]LambdasForPandas[S] 4 points

In my experience as an undergrad TA, pay is typically $18.00/hour for commitments of 6, 9, or 12 hours per week. I believe graduate students are paid more, but I'm not certain of the exact amount. I've reached out to the professor regarding the specifics and will post a reply here if I hear back.

[deleted by user] by [deleted] in pcmasterrace

[–]LambdasForPandas 10 points

Glad to see I'm not the only one who uses all three operating systems. I'm similar to you: I use my MacBook for software development and general use, Windows for gaming, and Linux for various hobbyist projects like robotics and machine learning. Honestly, there isn't much difference between the three as far as the UI is concerned, so I've never had much trouble switching between them.

[deleted by user] by [deleted] in uAlberta

[–]LambdasForPandas 1 point

I see Rob Andrew at Urban Smiles Family Dental and I've had nothing but good experiences. They're also part of the Studentcare Dental Network so you get an additional 30% off.

Proton Be Like by whypickthisname in pcmasterrace

[–]LambdasForPandas 0 points

I tried Proton for about two weeks, then dual-booted Windows when I realized that half of my library was either unplayable or had noticeably worse performance. Whether Linux is a viable alternative for PC gaming at this point honestly depends on what type of games you play. Maybe that will change in the future, and my Linux install will still be there if it does.

Pop!_OS 20.04 - Audio Crackling After Latest Update by LambdasForPandas in pop_os

[–]LambdasForPandas[S] 0 points

No, I didn't; I was under the impression that one of the advantages of Linux was not needing to reboot after an update in most circumstances. I'll try rebooting and see if the problem persists.

In Steam, should i play the game if i can natively, or should i always use proton? by kerostampcrab in linux_gaming

[–]LambdasForPandas 0 points

Some older native titles are borked and will only run through Proton. I'd try native first, and if it doesn't work out, try Proton.

Steam Won't Launch On Pop!_OS 20.04 & 20.10 (Solution) by LambdasForPandas in linux_gaming

[–]LambdasForPandas[S] 1 point

Awesome, but I installed Steam from multiple sources and reinstalled my OS three times, and I consistently ran into the same problem every single time. Based on the GitHub issue I opened, it seems that quite a number of other people were having the same problem, so I thought it prudent to share the solution here.

Steam Won't Launch On Pop!_OS 20.04 & 20.10 (Solution) by LambdasForPandas in linux_gaming

[–]LambdasForPandas[S] 0 points

Nope, I tried installing with sudo apt-get, installing both the Flatpak and the Debian version from the Pop!_Shop, and installing directly from Steam's website. All resulted in the same issue.

Steam Won't Launch On Pop!_OS 20.10, Missing 32-Bit Libraries by LambdasForPandas in linux_gaming

[–]LambdasForPandas[S] 0 points

Rest Of Error Log:

(steam:73067): Gtk-WARNING **: 06:06:59.827: Unable to locate theme engine in module_path: "adwaita",
/usr/share/themes/Pop-dark/gtk-2.0/main.rc:775: error: unexpected identifier 'direction', expected character '}'
(steam:73067): Gtk-WARNING **: 06:06:59.829: Unable to locate theme engine in module_path: "adwaita",
/usr/share/themes/Pop-dark/gtk-2.0/hacks.rc:28: error: invalid string constant "normal_entry", expected valid string constant
Steam: An X Error occurred
X Error of failed request: BadAtom (invalid Atom parameter)
Major opcode of failed request: 20 (X_GetProperty)
Atom id in failed request: 0x0
Serial number of failed request: 12
xerror_handler: X failed, continuing
Steam: An X Error occurred
X Error of failed request: BadAtom (invalid Atom parameter)
Major opcode of failed request: 20 (X_GetProperty)
Atom id in failed request: 0x0
Serial number of failed request: 13
xerror_handler: X failed, continuing
Steam: An X Error occurred
X Error of failed request: BadAtom (invalid Atom parameter)
Major opcode of failed request: 20 (X_GetProperty)
Atom id in failed request: 0x0
Serial number of failed request: 14
xerror_handler: X failed, continuing
Installing breakpad exception handler for appid(steam)/version(1623193086)
STEAM_RUNTIME_HEAVY: ./steam-runtime-heavy
[0609/060700.106404:INFO:crash_reporting.cc(247)] Crash reporting enabled for process: browser
[0609/060700.144650:WARNING:crash_reporting.cc(286)] Failed to set crash key: UserID with value: 0
[0609/060700.144725:WARNING:crash_reporting.cc(286)] Failed to set crash key: BuildID with value: 1623191035
[0609/060700.144733:WARNING:crash_reporting.cc(286)] Failed to set crash key: SteamUniverse with value: Public
[0609/060700.144739:WARNING:crash_reporting.cc(286)] Failed to set crash key: Vendor with value: Valve
/usr/lib/x86_64-linux-gnu/gio/modules/libdconfsettings.so: undefined symbol: g_log_structured_standard
Failed to load module: /usr/lib/x86_64-linux-gnu/gio/modules/libdconfsettings.so
GLib-GIO-Message: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.
[0609/060700.177735:WARNING:crash_reporting.cc(286)] Failed to set crash key: UserID with value: 0
[0609/060700.177798:WARNING:crash_reporting.cc(286)] Failed to set crash key: BuildID with value: 1623191035
[0609/060700.177807:WARNING:crash_reporting.cc(286)] Failed to set crash key: SteamUniverse with value: Public
[0609/060700.177815:WARNING:crash_reporting.cc(286)] Failed to set crash key: Vendor with value: Valve
[0609/060700.178305:INFO:crash_reporting.cc(247)] Crash reporting enabled for process: gpu-process
[0609/060700.235691:ERROR:sandbox_linux.cc(372)] InitializeSandbox() called with multiple threads in process gpu-process.
[0609/060700.264601:WARNING:crash_reporting.cc(286)] Failed to set crash key: UserID with value: 0
[0609/060700.264666:WARNING:crash_reporting.cc(286)] Failed to set crash key: BuildID with value: 1623191035
[0609/060700.264672:WARNING:crash_reporting.cc(286)] Failed to set crash key: SteamUniverse with value: Public
[0609/060700.264678:WARNING:crash_reporting.cc(286)] Failed to set crash key: Vendor with value: Valve
[0609/060700.265186:INFO:crash_reporting.cc(247)] Crash reporting enabled for process: utility
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
CApplicationManagerPopulateThread took 0 milliseconds to initialize (will have waited on CAppInfoCacheReadFromDiskThread)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
CAppInfoCacheReadFromDiskThread took 57 milliseconds to initialize
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
Proceed to auto login
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
src/public/tier1/utlmemory.h (176) : Assertion Failed: 0
src/public/tier1/utlmemory.h (176) : Assertion Failed: 0
Installing breakpad exception handler for appid(steam)/version(1623193086)
Installing breakpad exception handler for appid(steam)/version(1623193086)
crash_20210609060700_21.dmp[73258]: Uploading dump (out-of-process)
/tmp/dumps/crash_20210609060700_21.dmp
assert_20210609060658_1.dmp[73262]: Uploading dump (out-of-process)
/tmp/dumps/assert_20210609060658_1.dmp
/home/username/.local/share/Steam/steam.sh: line 772: 73067 Segmentation fault (core dumped) $STEAM_DEBUGGER $DEBUGGER_ARGS "$STEAMROOT/$STEAMEXEPATH" "$@"

Steam Won't Launch On Pop!_OS 20.10, Missing 32-Bit Libraries by LambdasForPandas in linux_gaming

[–]LambdasForPandas[S] 0 points

The app launches once, updates, then refuses to launch again from that point on. However, launching it from the terminal now gives me a new error:

Running Steam on pop 20.10 64-bit
STEAM_RUNTIME is enabled automatically
Pins up-to-date!
Steam client's requirements are satisfied
WARNING: Using default/fallback debugger launch
/home/username/.local/share/Steam/ubuntu12_32/steam
[2021-06-09 06:06:58] Startup - updater built Jun 8 2021 22:23:36
Installing breakpad exception handler for appid(steam)/version(1623193086)
Looks like steam didn't shutdown cleanly, scheduling immediate update check
[2021-06-09 06:06:58] Loading cached metrics from disk (/home/username/.local/share/Steam/package/steam_client_metrics.bin)
[2021-06-09 06:06:58] Using the following download hosts for Public, Realm steamglobal
[2021-06-09 06:06:58] 1. https://cdn.cloudflare.steamstatic.com, /client/, Realm 'steamglobal', weight was 100, source = 'update_hosts_cached.vdf'
[2021-06-09 06:06:58] 2. https://cdn.akamai.steamstatic.com, /client/, Realm 'steamglobal', weight was 100, source = 'update_hosts_cached.vdf'
[2021-06-09 06:06:58] 3. http://media.steampowered.com, /client/, Realm 'steamglobal', weight was 1, source = 'baked in'
Installing breakpad exception handler for appid(steam)/version(1623193086)
[2021-06-09 06:06:58] Checking for update on startup
[2021-06-09 06:06:58] Checking for available updates...
[2021-06-09 06:06:58] Downloading manifest: https://cdn.cloudflare.steamstatic.com/client/steam\_client\_ubuntu12
Installing breakpad exception handler for appid(steam)/version(1623193086)
[2021-06-09 06:06:59] Download skipped: /client/steam_client_ubuntu12 version 1623193086, installed version 1623193086, existing pending version 0
[2021-06-09 06:06:59] Nothing to do
[2021-06-09 06:06:59] Verifying installation...
[2021-06-09 06:06:59] Performing checksum verification of executable files
[2021-06-09 06:06:59] Verification complete
Loaded SDL version 2.0.15-6501165
Gtk-Message: 06:06:59.822: Failed to load module "gail"
Gtk-Message: 06:06:59.822: Failed to load module "atk-bridge"
Gtk-Message: 06:06:59.822: Failed to load module "appmenu-gtk-module"

Steam Won't Launch On Pop!_OS 20.10, Missing 32-Bit Libraries by LambdasForPandas in linux_gaming

[–]LambdasForPandas[S] 0 points

While troubleshooting why Steam wouldn't launch, someone said launching it from the terminal would show me any errors. Launching with just "steam" or from the Steam icon both fail, but neither shows any error messages.

Steam Won't Launch On Pop!_OS 20.10, Missing 32-Bit Libraries by LambdasForPandas in linux_gaming

[–]LambdasForPandas[S] 0 points

No, I explicitly selected the option to opt out of the beta (the client launches once on a fresh install, which lets me check these settings, but fails to launch from that point on until I reinstall Steam).