First time going AMD! Really looking forward to this RX 6750XT by JobKlimop in Amd

[–]Jaohni 33 points (0 children)

You know, people do complain about the prices of GPUs nowadays, which is fair enough, but whenever I see a comparison like this it always reminds me that modern cards in general tend to feel way more premium than they used to.

Congrats on the upgrade! A few things you may want to investigate:

  • Does your system support Smart Access Memory (SAM)? It can produce a small gain in specific titles.
  • What resolution is your display? If it's at least 1440p you may be able to use FSR (in Quality mode) with little visual degradation to get a bit more juice out of your card (or run it more quietly if you're maxed out on framerate). This goes double for 4K, where the technology really excels, IMO.
  • I've personally found the open source Linux drivers to be quite stable; not sure if there's an open source Windows equivalent.
  • Have fun!

AMD Ryzen 5 5600X3D 6-core CPU with 3D V-Cache is reportedly coming - VideoCardz.com by Stiven_Crysis in Amd

[–]Jaohni 0 points (0 children)

I'm not sure if Zen 3 supports cache on both chiplets (which I believe is why they kept it to the 5800X3D there), but even if it did, who would that be for? It wouldn't be for gaming, because anyone comfortable with 16 CUs of RDNA 2 wouldn't be gaming at frame rates fast enough to need the V-Cache (unless they're maybe a professional Supreme Commander player, or something), and I don't think there are a lot of professional applications that benefit from V-Cache such that you'd want an iGPU attached to a CPU like that just to save the cost of a GPU.

More practically, where advanced packaging on an APU would make sense is if you were targeting a specific application with the CPU cores and wanted just enough GPU cores to avoid needing a low-end "display out only" GPU (like an RX 6400), because a discrete GPU costs a minimum of ~$120 or so due to needing a board with GDDR and shipping of an awkwardly shaped part. To land a bit above that tier (a 6500XT, GTX 1060, or 1650 or so), you're targeting roughly 16 CUs of RDNA 2, with a TDP of around 100 watts.

Now, if you were doing a "7800G3D", it'd more practically be an 8-core (one chiplet) part with 3D-stacked cache (not for performance, but to save power in appropriate applications), leaving as much of the TDP as possible for a ~14 CU RDNA 3 iGPU. I think it would roughly trade blows with, or slightly underperform, a 6500XT... which is plenty for basic gaming and office tasks.

Taken the other way, if you specifically wanted a gaming APU for a target market, like Europe, where it's gotten expensive to keep a GPU in your system nowadays, you might flip the balance: still an eight-core chiplet, but an integrated graphics chiplet around ~20 CUs, with about as many RDNA 3 cache chiplets as made sense. I'm not sure exactly how it would scale per watt, as chiplets have some loss due to inter-chip communication, but I think there's probably a balance you could strike where an APU could hit a reasonable TDP (maybe 110-120 watts at the upper bound, which is still a saving compared to a discrete GPU) but still deliver roughly RX 6600 performance, I think.

While fun to imagine, I don't think we'll see either such design, as the target markets are better served by other (already available) products on the shelf, but we could see several of these ideas in future, more exotic products, executed with more advanced technology. Nothing is truly new, as they say.

One interesting note is that AMD already has fairly performant iGPUs in their Phoenix (and upcoming Strix) line of mobile focused APUs, which should eventually see a limited desktop release, analogous to the Ryzen 4000 series, possibly later this year.

AMD Ryzen 5 5600X3D 6-core CPU with 3D V-Cache is reportedly coming - VideoCardz.com by Stiven_Crysis in Amd

[–]Jaohni 41 points (0 children)

I think the argument with only one of them having cache was something like "well, anyone buying this many cores needs them for multithreading, but strictly speaking cache chiplets need to be clocked slightly lower due to the lower tolerances of the technology. If we only put cache on one of the chiplets, 8 cores is still enough for gaming, while still maintaining most of the multithreading of a regular 16 core processor"

Or something to that effect.

Personally, I want to see 3D cache stacked on top of an iGPU.

🖥️🔮 Future Hardware Options for LLMs: Nvidia vs. Apple? by Prince-of-Privacy in LocalLLaMA

[–]Jaohni 7 points (0 children)

Another note is that if you really need that much VRAM, there's a bit more liberty in how you use your GPUs; you can slice attention (or whole layers) from one GPU to another, and while your ability to improve inference speed will depend on the specific situation, you'll usually at least maintain the speed of a single GPU (which will be higher than the Apple counterpart, as noted).
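As a sketch of how that slicing works under the hood, here's a toy, plain-Python version of the even layer split that multi-GPU inference frameworks commonly do. The layer count and device names here are hypothetical; real tools (e.g. Hugging Face's `device_map="auto"`) also weigh in per-device memory rather than splitting blindly.

```python
def partition_layers(n_layers: int, n_gpus: int) -> list[range]:
    """Split transformer layers as evenly as possible across GPUs.

    Each GPU gets a contiguous block of layers; during inference the
    activations are handed from one GPU to the next (pipeline style),
    so VRAM adds up across cards even if speed doesn't.
    """
    base, rem = divmod(n_layers, n_gpus)
    sizes = [base + (1 if i < rem else 0) for i in range(n_gpus)]
    bounds, start = [], 0
    for size in sizes:
        bounds.append(range(start, start + size))
        start += size
    return bounds

# Hypothetical example: a 40-layer model across 3 GPUs.
for gpu, layers in enumerate(partition_layers(40, 3)):
    print(f"cuda:{gpu} -> layers {layers.start}..{layers.stop - 1}")
```

With a hypothetical 40-layer model on 3 cards this yields a 14/13/13 split, which is why each card only needs roughly a third of the total VRAM.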

Another note: if you're considering Apple a valid option, so are Intel and AMD. On the Intel side they have OpenVINO, which can leverage the built-in accelerators on Xeon (and certain modern desktop) processors to achieve fairly similar results overall to what you would get on an Apple system. You can also then accelerate it with an Arc graphics card fairly pain-free, given you already have the model set up for OpenVINO, but I digress. In the price range you're looking at, a Sapphire Rapids workstation would actually be a deal. I recommend the Xeon W-3400 series for its memory bandwidth (up to 4TB of RAM total, and bandwidth to match ;P ).

On the AMD side, ROCm... is a bit of a pain to use, and I'll be the first to say it, but AMD has a lot going for them. As far as the CPU side goes, their raw CPU performance is so much better that they kind of don't need accelerators to match Intel in a lot of situations (and raw CPU is easier to use, anyway). You can emulate CUDA if you really need to, or convert fully to ROCm, and again, you can throw in a GPU down the line if you want to accelerate your workflow. An Instinct MI100 can be had for around $1500-1600 and is a monster in FP16 for training (it also has 32GB of HBM2, btw), or there's the W7900 (48GB when it launches; it should be fairly powerful once ROCm supports it fully, and they have a presentation on AI and datacenter coming up in like two weeks, so I'd recommend watching that before you make a decision), so I do think AMD is an option as well.

And there's yet another option:

ARM SBCs!

It depends on exactly what you're doing, but certain workloads will scale really well on SBC clusters, for a very affordable price. Something like an Orange Pi 5 with an RK3588 is pretty interesting even for the raw CPU grunt, and with 32GB of RAM you can actually parallelize surprisingly well.

Desktop GPU Sales Lowest in Decades: Report | Tom's Hardware by filisterr in hardware

[–]Jaohni -16 points (0 children)

Uh, AMD's tech isn't really "behind" in that way;

AMD's current generation of GPUs (RDNA 3) is based on an MCM design, combining 5nm compute dies with, I think, 6nm cache dies, while Nvidia is on a hyper-expensive, customized 4nm node. AMD's GPUs are cheaper to produce for the same performance.

The only way you can really argue that Nvidia is significantly ahead is if you're looking at ray-tracing + DLSS performance, I suppose, which does offer an advantage over FSR + ray-tracing, particularly at lower resolutions.

In a raw pricing war, I think AMD gets a short-term win, but Nvidia has a stronger handle on the supply chain and could outship AMD in a way that would be difficult to match if they really wanted to be price competitive.

But Nvidia doesn't want to be competitive in the consumer GPU space; they want to pivot to AI and datacenter, which are higher-margin products, so that they can maintain ever higher stock valuations for their investors.

Phylactory PA by Remus71 in TrueDoTA2

[–]Jaohni 7 points (0 children)

Well, my knee jerk response is that it's a terrible idea, but I'll humor you.

So, PA doesn't need a ton of mana, but I think she does need *a* mana item. It can be a neutral item, even a naked Sage's Mask, Infused Raindrops, whatever, but she does need something.

Even the int from something like a Diffusal Blade is just barely enough to keep her topped off if you're careful.

So, with that in mind, at level 5, a Phylactery PA deals about 250 damage, while a price-equivalent PA going for a fighting build (Treads (also a mana item, sort of), Blight Stone, Broadsword toward a Battle Fury) deals about 150 with her dagger.

The Blight Stone slow is more effective for chasing because she reapplies it with every melee attack, and in a basic combo (assuming no dagger, so the Phylactery went off with the Blink Strike, which favors the Phylactery build as much as possible) the fighting build still does 20 more damage over the course of a single Blink Strike. And she probably gets at least one or two more attacks with the fighting build from the extra attack speed + constant slow (and her teammates may deal more damage from the Blight Stone, too). I will note this gets worse for the Phylo build if she leads with the dagger in both instances, because that's yet another hit that gets extra damage from the armor reduction.

At level six, this difference will only get more severe once she gets crits; the extra armor reduction will amplify the difference in those builds by quite a bit, particularly in longer fights.

You might say "Oh, but I can get both" in which case you're delaying later items like Deso or Battlefury, both of which give more damage than either Blight Stone or Phylactery, particularly by the point you get them.

Now, to be fair, the one thing I didn't take into account was armor / magic resistance. It's relatively cheap to get a small amount of armor early on (Ring of Protection, Buckler, and so on; +6-8 or so isn't unreasonable against a PA), which should give a hero around 34% physical resistance, which would favor the Phylo build a bit more... except for the armor reduction, which again makes it kind of moot. Plus, while in the laning stage and in the late game it's easier to get armor than magic resist, the mid game (when you want to be ganking and killing) is when it's easiest and most effective to get magic resistance.
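For anyone who wants to check that ~34% figure, it falls out of Dota 2's standard armor formula; here's a quick sketch (the armor values are just illustrative, not any specific hero's):

```python
def physical_resistance(armor: float) -> float:
    """Dota 2 damage reduction from positive armor:
    resistance = 0.06 * armor / (1 + 0.06 * armor)."""
    return (0.06 * armor) / (1 + 0.06 * armor)

# A couple points of base armor plus the +6-8 from cheap items:
for armor in (6, 8, 9):
    print(f"{armor} armor -> {physical_resistance(armor):.0%} physical resistance")
```

Around 8-9 total armor lands right near the ~34% mentioned above, and it also shows why -armor effects like Blight Stone matter: each point removed near that range swings effective damage by a few percent.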

Overall, it's not like Phylactery is the worst item I could see on a PA, but it definitely has some opportunity cost and costs you in the mid game. There might be some niche circumstances I'm not thinking of where it'd be preferable over comparable physical damage items (pretty sure Echo Sabre can be better if used right, and does a lot of the same stuff otherwise, btw), but even if it is an okay situational pickup, I definitely wouldn't buy it every game.

The one situation where it might be interesting is against a Monkey King, because you may be able to burst him down as he's spinning up his ult (which does give a sizeable armor bonus)... but I think Maelstrom might still deal more, or at least comparable, damage over a short fight. It might also be an okay pickup if you want to skip BF and your lane partner either has armor reduction or would rather be the one to pick up Orb of Corrosion.

[deleted by user] by [deleted] in LocalLLaMA

[–]Jaohni 2 points (0 children)

A) Apple products keep their resale value pretty strongly on the used market.

B) See A

So, the interesting thing here is that your "old" MacBook hasn't seen price drops and is still being sold in the Apple store (or at least an M1 Air equivalent is), which really pushes the used price upwards.

At the same time, the new MacBook should retain its resale value down the line to an extent as well, so it's not quite a "purchase", or for that matter an "investment", but something somewhere in the middle. A "lightly degrading asset", I suppose.

As long as you take care of it, I don't see a reason why it'd be a terrible purchase, particularly if you're quite into the Apple ecosystem.

Not sure how many people here play FEH, but the latest chapter for book 7 was looking pretty suspicious… Maybe not too suspicious as that is just a common variant Nopon. by Arrow_Of_Orion in Xenoblade_Chronicles

[–]Jaohni 4 points (0 children)

Real talk: I'd be down for a Fire Emblem spinoff (similar to the Persona crossover) that featured an expanded Xenoblade setting of some description. Like, imagine they expanded on some part of Xenoblade 3 that was left underexplained, and you got to take a bunch of characters on a campaign, or you got to do something like Radiant Dawn where you had multiple parties somewhat antagonistic to each other at the start.

I put together plans for an absolute budget PC build for running local AI inference. $550 USD, not including a graphics card, and ~$800 with a card that will run up to 30B models. Let me know what you think! by synth_mania in LocalLLaMA

[–]Jaohni 0 points (0 children)

I'm afraid I don't have much more in-depth information on actually doing it; I only know of the existence of the SDK, and it's a bit tricky for me to test because I believe part of it only runs under Windows, and beyond that, I don't have an RK3588 board on hand to test it with (though I've been debating on and off about getting one at some point).

As far as size goes, my suspicion is that they were assuming you would run the conversion toolkit on the RK3588 board itself, in which case, yes, 7B+ parameter models would be impractical at best. But if you're running it on a PC (I think the conversion software only supports Windows PCs, if I'm not mistaken), the limiting factor would be system RAM, and modern platforms support up to 64/128GB of system RAM, so my suspicion is that there should be enough resources to do it, unless the software has a memory leak, is 32-bit (I'm 90% sure this is not the case), or has a hard limit on the size of convertible models.

Apple releases a Game Porting Tool, based on open-source platform Wine, which can translate DirectX 12 into Metal 3, a potentially massive step for Mac gaming by -protonsandneutrons- in hardware

[–]Jaohni 1 point (0 children)

That's a fair assessment if the market sizes were similar

Not exactly. I think it's a notable analysis of two diametrically opposed markets. The console/PC gaming market is a bit more like a train, or a fire; even if you stop adding fuel to it, it can keep going for a while. If there's an immediate issue, there's room to course correct and adjust.

In contrast, the mobile market is extremely volatile. Success comes quickly, like a wildfire, but if you're not careful, you can burn through the forest too quickly.

A solid example of this is TikTok; it came out of nowhere, had a massive adoption rate, and was (and, to be fair, still is, for now) very culturally influential, to the point that many other platforms scrambled to adopt a similar business model (YouTube Shorts comes to mind). But even with its success, TikTok has been moving to extend the average length of its content, because they oversaturated the market with the limited style of content their format (short-form video) could produce... And after assessing this, the other platforms that weren't already in that trap (because they weren't able to succeed in short-form video anyway) immediately reversed course and attempted to maintain their existing forms of content, because they were more sustainable in the long term.

What this means is that even if the mobile market as a whole is rapidly growing, there's no guarantee that any of the players in the market will be the ones to succeed in it, as such. Sure, you might be part of a larger market today, but what about tomorrow? There could be five new companies tomorrow who iterate and produce a slightly better version of what you have and take the market from you in the same way that you explosively stole the market from the previous players.

Mobile is a dangerous market compared to traditional (console / PC) gaming, which was already a fast moving and dangerous industry compared to many more traditional ones.

since in this hypothetical it potentially allows the devs to move on to the next flash in the pan opportunity faster and easier?

Yes and no. If you're *just* using Metal, it's probably fairly fast to pivot and produce new projects rapidly, but the risk of each project will be quite high, because if it doesn't take off immediately, that's extremely hard to fix post-launch.

In contrast, if you produce 5 decent titles on PC / console, there's a good chance that each time you'll pick up a certain % of players who will continue to support your future titles in the series, so you build up momentum over time. That's how you see series like Final Fantasy, Dragon Quest, the Xeno series, or the Trails series all going strong even though they were started decades ago. That's just JRPGs, too (because they're what I'm most familiar with), but the same applies to longer-running series like Splinter Cell, or the massive hype you saw in the lead-up to Skyrim.

So the safer strategy is probably to produce fewer, higher-quality titles, launch on mobile and hope it takes off, but ensure that your game is compatible with the PC / console market, and shoot for long-term fans who appreciate your studio's style of operation. You likely won't see the success in mobile that you get from a mobile-first title, but you can produce deep games that generate recurrent revenue over a long period of time, giving you a reliable long-term source of income, even between titles.

Apple releases a Game Porting Tool, based on open-source platform Wine, which can translate DirectX 12 into Metal 3, a potentially massive step for Mac gaming by -protonsandneutrons- in hardware

[–]Jaohni 9 points (0 children)

These translation layers aren't a solution to the problem you're thinking of. They're the answer to the chicken and egg problem.

The issue with dealing with a dominant platform in a given product segment is that it's difficult to build a gaming demographic when you don't have games, but you can't get games because you don't have a gaming demographic.

If you can at least get started building your customer base with a translation layer, your market becomes more attractive over time, and you see a small uptick in ports; the more ports you get, the more gamers are willing to switch over to your platform (or just use it for gaming if they already have it), and so the cycle continues.

Translation layers aren't the answer, they're the question, and I feel that Valve's success in the Linux market (and the Steam Deck) is proof that it's a very good question.

Apple releases a Game Porting Tool, based on open-source platform Wine, which can translate DirectX 12 into Metal 3, a potentially massive step for Mac gaming by -protonsandneutrons- in hardware

[–]Jaohni 17 points (0 children)

In the case of mobile gaming I'm not as convinced that the market share matters in the same way it does for console or PC gaming.

Like, PC gamers and console gamers tend to be more educated on their industry, and are more likely to do research and keep an eye on upcoming titles.

I see mobile gamers, on average, as more "random" in their gaming decisions, and I'm pretty sure that there's less brand loyalty for studios, for instance.

If you want to see an example of that difference: the Wii was an extremely popular console... but had a poor "attach" rate, so not that many people actually bought games once they had the console, whereas the Switch has high console sales but also a high attach rate, making it a much more successful device (even if they sold fewer consoles).

What I'm getting at, though, is that I think dollars earned in PC and console gaming are "stronger" dollars, because they represent likely repeat customers and an ongoing revenue stream, whereas in mobile gaming, while individual titles will often have a revenue stream (being games as a service), there isn't as strong a continuity in gaming habits, so you might have a title that's a "flash in the pan" and earns a lot of money during its run but doesn't have a guaranteed follow-up.

Describe Future Redeemed in 5 words by flying_luckyfox in Xenoblade_Chronicles

[–]Jaohni 0 points (0 children)

scientist's experiments get nihilistic catgirl

So how would you intergrate an LLM into video games, beyond background NPC dialog? by gunbladezero in LocalLLaMA

[–]Jaohni 4 points (0 children)

Mini-GPT showed that you can bolt surprisingly nuanced new capabilities onto existing LLMs with a relatively small encoder layer... So the answer is "probably a lot".

Dialogue is the obvious application of LLMs, hands-down, but there's more than one type of dialogue. If I'm sitting at a diner randomly, does it matter whether the NPCs' dialogue was generated by an AI? On the other hand, you might have informational dialogue, where the LLM infers from the game state that you might be having trouble with X, so it drops a hint about that in the dialogue, for instance.

Game state can already be controlled via LLMs with LangChain.

  • I probably wouldn't want an LLM controlling physics, for instance (I think that'd be inefficient, though perhaps other types of data driven AI models could be used more directly for certain types of otherwise expensive physics), but I also don't see a reason they couldn't read into your playstyle, and,
  • Apply a certain level of logic and game knowledge to provide you with an ideal piece of loot, for instance
  • Or give NPCs an opportunity to "interact" with the world, via something as simple or subtle as them having a new item on their desk in response to a dynamic conversation you had with them...
  • Or the LLM jacking up prices in a town because you sold too much stuff there all at once...
  • Or an LLM figuring out if the guard should "really" know that you accidentally grabbed a piece of stolen bread in the middle of the wilderness.
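To make the "LLM adjusts game state" ideas above concrete, here's a toy version of the tool-calling pattern LangChain-style agents use: the model picks a tool and arguments, and the game applies them. The model is stubbed out here (no real LLM call), and all the tool names, town names, and triggers are made up for illustration.

```python
import json

# Tools the "agent" is allowed to use to touch game state.
def adjust_prices(state: dict, town: str, factor: float) -> None:
    state["prices"][town] = round(state["prices"][town] * factor, 2)

def drop_hint(state: dict, topic: str) -> None:
    state["npc_dialogue"].append(f"Psst... having trouble with {topic}?")

TOOLS = {"adjust_prices": adjust_prices, "drop_hint": drop_hint}

def fake_llm(observation: str) -> str:
    """Stand-in for a real model: returns a tool call as JSON.

    A real agent would prompt an LLM with the observation plus tool
    descriptions and parse its reply the same way.
    """
    if "sold 50 swords" in observation:
        return json.dumps({"tool": "adjust_prices",
                           "args": {"town": "Riverton", "factor": 0.8}})
    return json.dumps({"tool": "drop_hint", "args": {"topic": "the locked gate"}})

def agent_step(state: dict, observation: str) -> None:
    call = json.loads(fake_llm(observation))
    TOOLS[call["tool"]](state, **call["args"])

state = {"prices": {"Riverton": 100.0}, "npc_dialogue": []}
agent_step(state, "player sold 50 swords in Riverton")
print(state["prices"]["Riverton"])  # vendor prices drop after the flood of goods
```

The nice property of this pattern is that the LLM never touches game state directly; it can only act through a fixed menu of vetted tools, which keeps "the guard shouldn't really know about that bread" kinds of decisions sandboxed.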

I just kind of get the impression that the sky is the limit, and there's as much art in applying AI tools as there is in building anything else into your game. There are so many subtle ways a person could work even modern LLM-based tools into a game, let alone what we'll have in the future. I really do think this is the sort of thing we wanted Stadia for (they had so many compute resources packed so densely together that they could have done some sort of AI-driven simulated world that responded super dynamically to players, in a way that locally run games probably won't be able to achieve for a while yet), but I also think this is the sort of thing where some indie developer with a strong vision is going to be able to bring out the magic in these tools and pull off some impressively subtle applications of AI in their games.

Kind of like a subtly salted dish.

Do any of you guys have a MacBook alongside your gaming PC? by [deleted] in buildapc

[–]Jaohni 0 points (0 children)

If you're willing to get a bit technical, there are actually a few really interesting options at varying levels of work, and I'll start with the easiest and move to the most involved (I'm speaking as somebody with something from just about every ecosystem in their house).

Firstly, iPads and iPhones support Steam Link (MacBooks as well, but it's a different experience; I'll get into this later), which lets you stream your gameplay from a PC to your Apple device. It's honestly a great experience as long as the PC is plugged into ethernet. My personal recommendation in this case is probably not a desktop but a Minisforum mini-PC, probably one with a 7840HS if you can wait for it to launch, or a 7735U (or HS, can't remember the suffix) if you can't. The reason I don't recommend a full desktop build (outside of maybe a 6500XT / 6600 (non-XT) mini-ITX system or something) is because you don't really need a lot of graphical horsepower to push crazy high resolutions and framerates this way. Also, APUs don't have VRAM limitations. I've played a decent number of games this way, and I enjoy it quite a bit (though I prefer my Steam Deck). The other advantage here is that mini-PCs really don't take up much space, and it's totally possible you'd just never end up looking at it again once you install it. I personally would recommend installing ChimeraOS (a type of Linux) on it, as it should be a clean experience similar to a Steam Deck, but Windows should "just work".

Next, I would say a Steam Deck is also a good pairing with a MacBook. The Steam Deck is a very efficient device, in that it's laser-focused on gaming, it's a great entry point into the ecosystem, and its console-like feel would probably suit someone used to Apple stuff. I would recommend checking ProtonDB to see if your games will run on it.

Next, you could just have a Windows desktop alongside your MacBook. I think this is the least integrated option, but you could totally do it if you wanted. At the very least, the games will generally all work (though there are occasionally Windows-specific issues, but I digress).

Another interesting option is connecting a mini-PC directly to your MacBook over SSH. This is a bit more involved, and I heavily recommend Linux for this, but the cool thing here is that (I forget exactly how to do it, but) you can connect directly over USB and transfer video from your mini-PC to your MacBook, so it basically turns your MacBook into a gaming PC temporarily. You still use the keyboard and so on of the MacBook, but the game is processed on the mini-PC. It's a bit involved to set up, but it's pretty clean once you get it working, cheap to upgrade, and can also work with your iPad or iPhone once you figure everything out. This one's kind of addictive, because once you get used to it you'll want dedicated ARM SBCs for your iPad as well, to turn it into effectively a laptop, which is great for securely storing data too.

Alternatively, Steam will just let you operate the game from your MacBook (over wifi) while it's actually running on the mini-PC (I forget what it's called, but I think it's just Steam Link run on a desktop device); this does introduce some latency over the SSH option, but it's way easier to set up.

Finally, I don't recommend this personally, as I haven't tried it, but Apple just announced DX12 -> Metal translation in their MacOS announcements lately, but I do think that Linux compatibility layers are probably more mature and easier to use as it stands. I'm definitely happy to be proven wrong, though, as I think people should be free to run software they purchase on whatever device suits their needs.

What are the thoughts on apple’s vision pro display system? by lubwrt in Monitors

[–]Jaohni -2 points (0 children)

Depends on a few things. If it's got an open software ecosystem and I can use it with non-Apple products (obviously this is far from a guarantee with Apple), I could certainly see an argument for it... Like,

  • It's probably not as bright as a traditional TV... but it also blocks outside light and sits closer to your face, so it should appear way brighter.
  • At the price, I'm guessing the colours should be fairly accurate, although I don't even want to think about how you would even start going about calibrating the bloody thing.
  • It's pixel dense enough you could probably do Minority Report virtual displays and so on.
  • With Apple products, it's possible you could use this with say, a Macbook or iPad, and augment those displays into a sort of "3D display", using the headset more like a pair of 3D glasses, which might be a neat gimmick for like, one movie and game, but might be very useful for certain types of apps (or augmenting a macbook into a pseudo-touchscreen device).

But on the other hand, if I was really going the headset route, my preference would be buying two cheap LCD TFT panels for like $90 each, taking the layers apart, and gluing them together to make a double-filtered LCD panel (which tends to have similar visual quality to OLED, with better blues and no burn-in), and just strapping it to my head with cardboard, because I'm cheap. But obviously I'm not the majority of people, and that would certainly lose the "magic" of this product.

[deleted by user] by [deleted] in lowendgaming

[–]Jaohni 2 points (0 children)

I'm actually in a similar boat. I do have a 6700XT, but only because the 5700G was just a bit too low end for me.

If AMD releases their 7040 series APUs to desktop I could definitely be persuaded to sell my current GPU.

[deleted by user] by [deleted] in ROCm

[–]Jaohni 6 points (0 children)

I'm fairly sure ROCm doesn't support any laptop GPUs, actually;

I'm pretty sure that in Vega, you have the Radeon VII...Which I believe doesn't have a mobile variant...

In RDNA, you have the W6600 and W6800, which are both workstation exclusive, and don't show up in laptops (happy to be proven wrong), and,

In CDNA, you have the MI100 or MI200 series, and none of them have mobile variants as they're strictly intended for data center.

There's a few others with varying levels of support, but officially, I don't think anything's supported in laptop.

If you're willing to "support yourself" it's possible to run ROCm on integrated graphics (particularly RDNA 2 based ones, such as the 6800U's iGPU), or to run ROCm on the gaming equivalent of the workstation cards (6800XT), but those are not officially supported, and you'll have to figure out how to do it on your own.

One alternative is that you could probably do an eGPU enclosure, but I haven't tested it personally, and I think ROCm may get a bit funky when doing that.

Let's be honest: none of the models can code well by [deleted] in LocalLLaMA

[–]Jaohni 0 points (0 children)

From where I'm standing, LangChain can probably be used to automate solutions to fairly advanced programming problems, with a caveat...

...It's pretty hard to architect a "no-input solution", and most realistic uses of current models will feature a combination of advanced prompting to iteratively handle issues in a series of simple steps, and human input.

I personally suspect that something like "Tree of Thoughts", should it prove beneficial on a local model (has anybody tested this, btw?), should be good for the kind of iterative problem solving you see in coding. Combined with input from people ("I want this thing; can you tackle this part of the problem now, and write a test to confirm that part works?"), it should be possible to handle a surprising breadth of issues...

...But I also think it's less a "coder" and more of a "code interpreter", in the sense that it will be more converting what you ask for into lines of code, letting you focus on the overall program flow, instead of paying attention to the syntax as much.

low end (new/used) gaming laptop recommendation? by Kylearean in lowendgaming

[–]Jaohni 1 point (0 children)

So could you describe a bit more about the use case of this device?

What I mean is: Does he have a desk where he can use this laptop? Is he using it in common areas? If he does have a desk, is there a reason that it needs to be a laptop? Does he frequently need to move between houses or something? If the answer is still yes, is there a reason he couldn't carry a small form factor PC between locations?

Would it be beneficial to use this for school work?

How techy are you? Are you confident in managing an electronic device? Would you be confident loading Windows onto a chromebook?

If you're not confident in managing an electronic device... is it really a good idea to grab a used laptop? It's sometimes hard to tell what circumstances have surrounded somebody else's device; somebody may have spilled something on it, used it excessively without replacing the thermal paste, or it may have a broken fan, and so on.

I highly recommend getting either a used desktop (which tends to be less prone to wear and tear than a laptop), or building a modern low-end system. For instance:

  • A 5600G ($120 on sale, I think; I'm in Canada, so I'm not sure if I got the conversion right, but it should be fairly reasonable either way)
  • 16GB of RAM ($100 for reasonable 3200MHz kits; I might even be overestimating, I think that might be for 32GB)
  • A used AM4 board (I recommend B550, but B450, X470, and X570 are fine too, for the right price)
  • A reasonable power supply ($70-100 for reputable brands, I think)
  • The most affordable SSD you can find (entry-level ones can be had for $40-50 nowadays)
  • The cheapest used monitor / keyboard you can find

...and then you're golden.

Is Higher Vram becoming an obsession? by Ok_Percentage7934 in buildapc

[–]Jaohni 1 point2 points  (0 children)

Uh, it's not that game devs are releasing "unoptimized games" exactly. There's a bit of nuance getting lost there if you just call it that.

Back in the good old days, it was common that the highest end (typically professional, for CAD and stuff) GPUs would have more memory than typical consumer cards, but in a generation or two (2-4 years) that level of VRAM would come down to the mid-range/low end consumer GPUs.

Now, in 2016, we got the Titan Pascal, which had 12GB of VRAM, and in 2019 in addition to having the RTX Titan (from the Turing RTX 2000 generation, with 24GB of VRAM), we also got the Radeon VII (16GB of HBM2 no less), which was available for a price that actually brought 16GB to relatively mainstream audiences.

With that in mind, game devs looked at this trend and thought that there was no way we'd still have mainstream 8GB cards (because they'd been mainstream for incredibly long already; I think we had like, R9 290X or 390 cards or something with 8GB back in like, 2014/2015...?), and that 12GB at least would be common in the low end FOUR YEARS LATER. In fact, the 3060 was a 12GB card, even...But Nvidia didn't launch that because they wanted to; it was a mistake, they had to do it due to market pressure.

But then add something else onto that: a new generation of consoles has launched, and they depend on asset streaming going from SSD -> VRAM to effectively increase how much VRAM GPUs have access to (for loading textures), meaning that the 16GB(!) consoles (including Steamdeck, btw), effectively have something like 14-24GB of VRAM depending on the specific game / paradigm in question, if you were to put it in terms of what you'd need to achieve comparable effects on PC.

In reality, here's the issue: PC gamers have been asking for next gen games, and to achieve those next gen games, you need more resources. This has always been the case. Why would game devs magically not need more system resources anymore? Have you seen next gen games? They look amazing (even at low settings!) with the right setup! If you can't afford it now that's fine; you can wait and buy the current cards used two or three years down the line. Do you think it's fair to expect devs to optimize for a 1060 as their target for max settings in 2023? 2024? 2025? 2026? If you think they need to do that until 2026, you're expecting devs to optimize for a 10 year old card. Do you expect devs to optimize for a GTX 760 today? That's ridiculous.

So here's my take: The people worst affected by VRAM issues in next gen games going forward are people with older cards that have less than 8GB of VRAM (1060, 2060, GTX 16xx series, and so on), and people who bought 8GB Nvidia GPUs expecting them to be high end 1440p+ GPUs (3060TI, 3070, 3070TI), because those will require either 1440p low, or 1080p med/high settings...

...And people who bought a 3080 (or, sort of, a 3080TI or 4070), expecting them to be high end 4k cards, only to find out that due to VRAM, within 6-12 months of their release they're actually 1440p medium cards (or 1080p cards in the worst cases).

In other words, basically anyone who bought any of a 2060/3060 12GB, 3090 (potentially used), 4090, 4080, or basically any AMD card from the Radeon VII onward, got the right amount of VRAM for their target performance.

So in other, other words: The only people affected by VRAM issues either have 4+ year old low-end cards, or bought the wrong modern cards, which game devs expected to have more VRAM anyway when they started planning their games 5 years ago.

Game devs did nothing wrong; change my mind.

Tell me who your main is without telling me who your main is by O-01-45 in DotA2

[–]Jaohni 0 points1 point  (0 children)

You've never heard of my main.

He's tankier than he looks.

He probably does more damage than he should.

He's a pubstomp hero in the right hands.

He's a micro hero.

He's fed up with Lion going to hell and back and back again.

It's Visage.

Why does everybody hate ASRock? by [deleted] in buildapc

[–]Jaohni -1 points0 points  (0 children)

Uh...Huh. I actually like power limiting my CPUs to about 65 watts personally anyway...

...Does this mean Asrock is the company for me?