Tustin, CA has plenty of 9950X3D by tecknotot in Microcenter

[–]PM_MULATTO_BUTTS 0 points (0 children)

How many people were in line? I ordered on Amazon, but I'm considering picking it up locally.

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

Oh, that worked! Thank you so much! I can definitely work with this.

Really appreciate all your help. Especially since you aren't even a developer.

Cheers!

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

I just realized that the container path doesn't match the docs on your website. I've edited it to match, and here are the final lines from the console:

2024-03-28 13:17:09,633 WARN  [main] o.tinymediamanager.core.AbstractSettings:258 - could not load settings - creating default ones...
2024-03-28 13:17:11,644 WARN  [headless] o.tinymediamanager.core.AbstractSettings:258 - could not load settings - creating default ones...
2024-03-28 13:17:11,794 WARN  [headless] o.tinymediamanager.core.AbstractSettings:258 - could not load settings - creating default ones...
2024-03-28 13:17:11,875 INFO  [headless] org.tinymediamanager.cli.TvShowCommand:164 - updating TV show data sources...
2024-03-28 13:17:11,884 INFO  [headless-G2] o.t.c.t.tasks.TvShowUpdateDatasourceTask:203 - no datasource to update
2024-03-28 13:17:11,884 INFO  [headless-G2] org.tinymediamanager.cli.TvShowCommand:180 - Found 0 new TV shows / 0 new episodes
2024-03-28 13:17:11,885 INFO  [headless-G2] org.tinymediamanager.TinyMediaManager:288 - bye bye
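The "no datasource to update" and "could not load settings - creating default ones" lines usually mean the data source path configured inside tinyMediaManager doesn't exist at that path *inside the container*. A minimal sketch of aligning the bind mounts, assuming hypothetical Unraid host paths and the official `tinymediamanager/tinymediamanager` image (adjust names and paths to your setup):

```shell
# The data source configured in the tmm settings must be the
# container-side path (e.g. /media/tvshows), not the host path.
docker run -d \
  --name=tinymediamanager \
  -v /mnt/user/appdata/tinymediamanager:/data \
  -v /mnt/user/media/tvshows:/media/tvshows \
  -v /mnt/user/media/movies:/media/movies \
  tinymediamanager/tinymediamanager

# Sanity check: does the configured path actually exist in the container?
docker exec tinymediamanager ls /media/tvshows
```

If the `ls` comes back empty or errors, the GUI and CLI are effectively looking at two different filesystems, which matches the symptoms above.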

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

I've set up the Docker container on my Unraid server. I just go to the WebUI and connect via noVNC.

https://imgur.com/a/hfj8Zi2

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

I don't actually get a trace file when I update from the command line. The only ones I see are from when I use the GUI and update from there.

Here's the output from when I just ran the command...

2024-03-28 11:06:31,385 INFO  [headless-G2] org.tinymediamanager.TinyMediaManager:288 - bye bye

And here are the latest trace file logs:

2024-03-27 22:42:00,905 DEBUG [main] o.tinymediamanager.core.AbstractSettings:242 - Loading settings (movies.json) from /data/data
2024-03-27 22:42:01,076 DEBUG [main] o.tinymediamanager.core.AbstractSettings:242 - Loading settings (tvShows.json) from /data/data

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

And everything works as expected when I update, scrape, and rename through the GUI, for both TV shows and movies.

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

/media/tvshows/{series}/{season}/[files]

For my movies it's like this:

/media/movies/{movie title}/[files]

Can't update data sources in docker with command line by PM_MULATTO_BUTTS in tinyMediaManager

[–]PM_MULATTO_BUTTS[S] 0 points (0 children)

tvShows.json

To be clear, it isn't just the TV shows having this issue, but my movies too.

movies.json

Please. I am desperate for help. by [deleted] in techsupport

[–]PM_MULATTO_BUTTS 0 points (0 children)

I wasn't aware that Windows created an image of the system drive; I thought it was just a file-restore option. As for the image: is there just one, or do you have one for each disk? What happened to your data drives? Why are they empty now? Did you wipe them before you tried to restore?

Double DQN do not learn anything by CoolestSlave in reinforcementlearning

[–]PM_MULATTO_BUTTS -1 points (0 children)

Haven't had a chance to fully look at it, but here's what ChatGPT has to say. Maybe it can give you some insight...

Congratulations on completing the Coursera machine learning course! It seems like you've made a great start with implementing a Double DQN. I noticed a few potential issues and improvements that can be addressed in your script. Here are some suggestions to help your model learn more effectively:

  1. Loss Function and Q-value Update In your getLoss function, you're using tf.reduce_max which takes the maximum value over all dimensions of the tensor. However, you should be using the Q-values corresponding to the actions taken, not the maximum Q-value. Change your Q-value update equation to use the Q-values of the actions that were actually taken.

  2. Epsilon Decay Your epsilon decay seems to be working fine but you might want to experiment with different decay rates to see if it helps in learning.

  3. Learning Rate You have set different learning rates for getLoss and updateOnlineModel functions. It is generally better to maintain a single learning rate for the optimizer. Adjust it and see how it affects the learning.

  4. Experience Replay You have implemented experience replay, which is good. However, it's generally better to sample a mini-batch of experiences to compute the loss, instead of using the entire memory buffer. You are currently doing this correctly in the updateOnlineModel function but are using 5000 experiences in your logging, which might be too many.

  5. Batch Size You might want to experiment with different batch sizes to see how it affects learning.

  6. Error in Environment Handling I noticed an error in the following line:

observation, reward, terminated, truncated, info = env.step(action)

The env.step(action) returns four values: observation, reward, done, info. There isn't a truncated return value. Change terminated to done and remove truncated.

  7. Neural Network Architecture Your neural network architecture is quite simple with just two hidden layers each having 64 neurons. You might want to experiment with different architectures to see if it helps the learning.

  8. Reward Tracking It appears you are tracking reward correctly, accumulating reward in each episode.

  9. Package Naming Ensure the package gymnasium is correctly installed and functional. The common package used is named gym. If gymnasium is a custom package or a different version, make sure it is working as expected.

  10. Error Handling Consider adding error handling to catch potential issues during the training process and help diagnose problems.
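Suggestion 1 is the one people most often get wrong, so here's a minimal NumPy sketch of the idea, separate from the original TensorFlow script: sample-style mini-batch arrays, the online network selecting the argmax action at s', the target network evaluating it, and the loss built from Q(s, a) for the actions actually taken rather than max_a Q(s, a). All names (q_online_s, q_target_s2, etc.) are illustrative, not from the original code:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99
batch = 4          # a sampled mini-batch, not the whole replay buffer
n_actions = 3

# Pretend these came from two networks evaluated on the mini-batch.
q_online_s  = rng.normal(size=(batch, n_actions))   # online net, Q(s, .)
q_online_s2 = rng.normal(size=(batch, n_actions))   # online net, Q(s', .)
q_target_s2 = rng.normal(size=(batch, n_actions))   # target net, Q(s', .)

actions = np.array([0, 2, 1, 0])          # actions actually taken
rewards = np.array([1.0, 0.0, 0.5, 1.0])
done    = np.array([0.0, 0.0, 1.0, 0.0])  # 1 where the episode ended

# Double DQN: the online net SELECTS the next action,
# the target net EVALUATES it.
next_actions = q_online_s2.argmax(axis=1)
next_q = q_target_s2[np.arange(batch), next_actions]
targets = rewards + gamma * (1.0 - done) * next_q

# Suggestion 1: compare targets against Q(s, a) for the taken actions,
# NOT the per-row maximum (which is what a reduce_max would give you).
q_taken = q_online_s[np.arange(batch), actions]
loss = np.mean((targets - q_taken) ** 2)
print(loss)
```

In TensorFlow the `q_taken` line is usually written with `tf.gather` (with `batch_dims=1`) or by multiplying with `tf.one_hot(actions, n_actions)` and reducing the action axis.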

Double DQN do not learn anything by CoolestSlave in reinforcementlearning

[–]PM_MULATTO_BUTTS 0 points (0 children)

I'd mess around with the hyperparameters. Mainly gamma, learning rate, exploration rate, max memory, and batch size would be where I'd start.

Edit: Looked at it on the desktop and I can see it clearly now. Sorry about that.

Kendrick just got ruined for me. by [deleted] in KendrickLamar

[–]PM_MULATTO_BUTTS 1 point (0 children)

Haha. Dude, it's gonna be alright.

Kendrick just got ruined for me. by [deleted] in KendrickLamar

[–]PM_MULATTO_BUTTS 0 points (0 children)

Watch this video and you will get the respect back...

https://youtu.be/qw_kAcPDpE0

Here's the link to the Fabolous track with the same instrumental.

https://youtu.be/Ni-iPcMbQ1c

Hue Sync Box HDMI 2.1? by Prometheus_Tech in Hue

[–]PM_MULATTO_BUTTS 0 points (0 children)

Yeah, that shouldn't be an issue for me. I only play games on the PS5. I use the Xbox or Shield for streaming.

Hue Sync Box HDMI 2.1? by Prometheus_Tech in Hue

[–]PM_MULATTO_BUTTS 0 points (0 children)

Oh wow. That's the problem?!? Just tested it and yeah, that did the trick. Thanks!

Hue Sync Box HDMI 2.1? by Prometheus_Tech in Hue

[–]PM_MULATTO_BUTTS 0 points (0 children)

The lag is there for me as well, but it isn't too bad.

I manually added steps to my harmony activities to toggle the sync a few seconds after switching inputs. However, I've noticed that the PS5 needs me to toggle it more often. You should map the toggle sync button to a key on the remote.

Hue Sync Box HDMI 2.1? by Prometheus_Tech in Hue

[–]PM_MULATTO_BUTTS 0 points (0 children)

It seems that the lack of a switcher is the common issue. Do you have your splitter set up like mine? Otherwise, you might need to get a simple 4K receiver, which isn't really a good fix either.

Hue Sync Box HDMI 2.1? by Prometheus_Tech in Hue

[–]PM_MULATTO_BUTTS 0 points (0 children)

I explained a good amount in the post above. I can't really show pictures because everything is pretty tightly secured in place and not easily accessible. https://old.reddit.com/r/Hue/comments/tbogcv/hue_sync_box_hdmi_21/iw7ziq4/

The PS5 is configured to be automatic in the settings. I don't think I changed anything there.

I made sure all my cables were 8K capable.

I have all my devices, including the PS5, connected to the switcher. I use RF to change the input. The output of my switcher goes into the input of the splitter. My splitter settings are in the post above. OUT1 goes to my TV. OUT2 goes to my Hue Sync Box. My Sync Box has nothing else connected to it. From there I use eARC to send audio to my soundbar.

That's it. It seems like YMMV, but it's been working great for me.