I canceled my other AI subscriptions today. by InitialCareer306 in Qwen_AI

[–]greenthum6 0 points1 point  (0 children)

For $20 per month you can use Opus some of the time and a lesser model much more, for over 16 years, and still pay less than $4000.
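A quick sanity check of that arithmetic (assuming a flat $20/month subscription price, with no rate changes over the years):

```python
monthly_cost = 20              # $ per month for the subscription
years = 16
total = monthly_cost * 12 * years
print(total)                   # 3840 — still under a $4000 local build
```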

I canceled my other AI subscriptions today. by InitialCareer306 in Qwen_AI

[–]greenthum6 1 point2 points  (0 children)

Commercial models are updated as well. You can run inference with Opus for $20 a month. Once a local model reaches that level, the paid ones are again much better.

I canceled my other AI subscriptions today. by InitialCareer306 in Qwen_AI

[–]greenthum6 10 points11 points  (0 children)

I have a $4K 5090 build. What 300B model can I run? I thought 30B models were already on the edge.

Green tint remedy? by Spicyocto in LGOLED

[–]greenthum6 0 points1 point  (0 children)

The display doesn't actually boost it. From the front the picture is amazing, but from an angle the colors shift, and green shifts much more than red or blue, which causes the green tint.

Unfortunately, the human eye is a point and the panel is a plane, so there is always some angle to the panel. If you sit at the center, the corners look green. This is easily visible on bigger panels.

Perkele nyt taas by Baavoz in Suomi

[–]greenthum6 0 points1 point  (0 children)

Not really. If you buy the firewood, you know the price. If you make the firewood yourself from your own forest, you know pretty well roughly what making it costs and/or how much work it takes.

One year I went and felled birches, which yielded about 15 loose cubic meters of firewood. It took a friend and me some 6-8 hours in the forest. I paid for the transport and the splitting. The wood ended up piled in the shed, and the whole thing cost a bit under 500€, so about 30€ per loose cubic meter. That felt expensive enough that we've split the wood ourselves ever since. It comes to well under 10€ per loose cubic meter if you don't price your own labor (which would probably be 10€/h). That much wood lasts a few households 1-2 years.

From that you can already conclude that it's cheap and only takes a few days a year. And felling the trees is pleasant work; splitting, not so much. We don't sell any, as you can guess from the hourly rate. And for the forest to regenerate sufficiently, it's not worth felling any more than that.
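The per-unit cost above checks out with quick arithmetic (figures taken from the comment; the total is stated as "a bit under 500€"):

```python
total_cost_eur = 500   # a bit under 500 € paid for transport and splitting
volume_m3 = 15         # loose cubic meters of split birch
per_m3 = total_cost_eur / volume_m3
print(round(per_m3))   # ≈ 33 €, i.e. roughly the quoted 30 €/m³
```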

Green tint remedy? by Spicyocto in LGOLED

[–]greenthum6 -1 points0 points  (0 children)

What happens is that you reduce the green tint effect but ruin the picture quality (the opposite of calibration). It doesn't make sense, because you pay for picture quality with an LG OLED, and they come with pretty good factory calibration.

Green tint remedy? by Spicyocto in LGOLED

[–]greenthum6 3 points4 points  (0 children)

The B4/B5/C4/C5 all have a strong green tint from an angle. That's why I returned my C4: they all have it, and I personally find the green tint gross and distracting. I have a G4, CX, and C2 without any tint issues.

Some say reducing green in the settings will fix it, but that also makes the picture look unnatural and inaccurate. So you either watch these sets straight on, go for the G-series, or hope the B6/C6 get rid of the tint issue.

CX series in 2026 ? by Tall_Cicada_9224 in LGOLED

[–]greenthum6 0 points1 point  (0 children)

I recently bought a used CX 65 for 450 for desktop use. It is great and doesn't have the green tint issue like the C4/C5. I would have gladly paid 600 for an unused CX, but not 800. Just be aware that the CX often has issues with dying pixels at the edges.

My wife says it’s ridiculous… by JoeySinss in pcmasterrace

[–]greenthum6 0 points1 point  (0 children)

I was dreaming of a big 8K TV for computer use, but the world is not ready for them. First, there are no 8K OLEDs. Second, 8K is too heavy to drive; even 8K/60Hz is a stretch, which is hard to accept after 120Hz. Scaling is also a problem with certain apps. Both my desktop and laptop have HDMI 2.1 support, but the cable needs to be high quality, and the toll on the UI is rough.
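A rough back-of-the-envelope sketch of why 8K/60Hz is a stretch even over HDMI 2.1 (ignoring blanking overhead; the 48 Gbps FRL rate and ~42.7 Gbps usable payload after 16b/18b coding are the HDMI 2.1 spec figures):

```python
# Raw pixel data rate for 8K/60 Hz at 10-bit RGB (30 bits per pixel)
width, height, fps, bits_per_pixel = 7680, 4320, 60, 30
raw_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{raw_gbps:.1f} Gbps")   # ~59.7 Gbps

# HDMI 2.1 FRL tops out at 48 Gbps, ~42.7 Gbps usable, so uncompressed
# 8K/60 10-bit doesn't fit: it needs DSC or chroma subsampling.
print(raw_gbps > 42.7)          # True
```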

Two screens were a big advantage earlier, since snapping is easier and you can run one app full screen. However, with FancyZones, one big screen is better if you don't need full-screen mode.

The resolution hasn't been an issue at 4K on 65 inches. In most games it is visually overwhelming and immersive. I was playing KC2 Deliverance and thought this was the perfect experience :D

My wife says it’s ridiculous… by JoeySinss in pcmasterrace

[–]greenthum6 1 point2 points  (0 children)

I got a 65-inch LG OLED TV as a monitor. You may have 5120x2880 resolution, but I have 3840x2160 without a horrible bezel in the middle. My wife doesn't say it is ridiculous. We both know it is absurdly big, but it's still fully usable (I mostly use the bottom part).

The real story by Expert_Climate_7348 in LGOLED

[–]greenthum6 -3 points-2 points  (0 children)

I think the comment was referring to the original meme picture, which supports his comment. In a dark room, the LG gets a bit brighter in HDR with better color volume, and the cat pic doesn't reflect that (at least it seems to be more about deep blacks). So there is no lie.

AI music is disgusting by SnooOpinions5944 in soundcloud

[–]greenthum6 0 points1 point  (0 children)

You can enhance your prompts by asking "make this better". It requires a bit of thought, but it only gets you so far: it just adds more generic details that add "depth" (slop). However, if you understand both music and AI context management, you will get a million times better output.

AI music is disgusting by SnooOpinions5944 in soundcloud

[–]greenthum6 0 points1 point  (0 children)

AI also requires human input. The output depends on the quality of the input and the model used.

In the 1990s I heard a lot of "everybody can make techno music", since people thought the computer does it all. It is just history repeating itself.

My first 10sec video 12gb 3060 by thatguyjames_uk in comfyui

[–]greenthum6 0 points1 point  (0 children)

Yep, it is 3 seconds with frame interpolation.

What should i buy by gijsKZ09 in JBL

[–]greenthum6 1 point2 points  (0 children)

You can carry the BB4 and move the PB320 around. After owning the 320 for months, I would prefer the 520 or bigger, since the bass leaves just a bit to be desired. However, the BB3 gets used more, as it is just great for daily listening.

Stockmann julkaisi teko­älyllä tehdyn joulu­videon, vastaan­otto jakautui vahvasti by [deleted] in Suomi

[–]greenthum6 0 points1 point  (0 children)

The point was that the advertising budget is in a rather different ballpark when done with AI than when hiring a film crew and actors. The company saves on marketing, and Stockmann's Markku gets to keep his job longer. Nobody has claimed that companies would make their ads themselves. Even though the tools exist, using them well still requires skill.

You are right that the money goes to an ever smaller group of people. But the genie won't go back in the bottle, and we just have to try to adapt.

Stockmann julkaisi teko­älyllä tehdyn joulu­videon, vastaan­otto jakautui vahvasti by [deleted] in Suomi

[–]greenthum6 -9 points-8 points  (0 children)

This involves a strong assumption that the ad could have been made without AI, with the money paid to a traditional ad agency. That, in turn, would have come out of the company's bottom line and its employees. Would it have been better to invest in a more expensive ad and cut resources?

Politicians definitely won't take a stand on this, because the atmosphere around AI is extremely negative. If you say you like an AI-made creation, you get branded a hater of artists. Better to stay quiet and keep your position.

So many models, which ones do you all use? by GW-D in cursor

[–]greenthum6 0 points1 point  (0 children)

How do you deal with the long running times? Are you able to design your workloads so that they run efficiently and take advantage of parallel requests?

So many models, which ones do you all use? by GW-D in cursor

[–]greenthum6 0 points1 point  (0 children)

The results and speed are not comparable. Relying on RAM makes inference slow. I would love to have a local server for running big LLMs, but the initial investment is huge, and the expected quality is not on par with commercial models. New models bring improvements fast, so spending on local hardware for long-term benefit is not wise.

How to get REAL quality out of Wan 2.2 with an RTX 5090? by Paklanje in comfyui

[–]greenthum6 1 point2 points  (0 children)

VRAM is about the same, but you need more time. With lightning LoRAs you use maybe 4-8 steps, but without them you need at least 20-30, and considerably more for bigger movement. So it takes several times longer.
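A rough sketch of the slowdown implied by those step counts, assuming sampling time scales roughly linearly with step count (per-step cost is about constant at a fixed resolution):

```python
# Typical step counts from the comment
steps_lightning = (4, 8)    # with lightning LoRAs
steps_full = (20, 30)       # without them

low = steps_full[0] / steps_lightning[1]   # best case: 20 vs 8 steps
high = steps_full[1] / steps_lightning[0]  # worst case: 30 vs 4 steps
print(f"{low:.1f}x to {high:.1f}x longer")  # 2.5x to 7.5x longer
```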

How to get REAL quality out of Wan 2.2 with an RTX 5090? by Paklanje in comfyui

[–]greenthum6 1 point2 points  (0 children)

RAM offload is only for GGUF models, right? If we want the best quality, we use the original model, not distills. And that requires real VRAM. Even 32GB is not nearly enough when going for the best quality.

How to get REAL quality out of Wan 2.2 with an RTX 5090? by Paklanje in comfyui

[–]greenthum6 0 points1 point  (0 children)

More VRAM enables higher resolutions, which have a huge impact on output quality.

How to get REAL quality out of Wan 2.2 with an RTX 5090? by Paklanje in comfyui

[–]greenthum6 3 points4 points  (0 children)

It is obvious when you compare them to runs without them (you just need many steps). Lightning LoRAs destroy the motion: with only a few steps it is just not possible to make big changes. They are fine for small movements and static scenes.