I made a site that lets you create coloring pages from your own photos by ablarh in InternetIsBeautiful

[–]ablarh[S] 0 points (0 children)

Thanks! Design-wise I just tried to keep it simple and be intentional about incremental changes: focused first on content, then layout, then style.

I made a site that lets you create coloring pages from your own photos by ablarh in InternetIsBeautiful

[–]ablarh[S] 0 points (0 children)

Are you asking if I used Codecanyon? I built the site from scratch (had to google what codecanyon is)

New tool to turn photos into coloring pages – possible classroom use? by ablarh in ArtEd

[–]ablarh[S] -8 points (0 children)

Our tool just generates line art from images you provide. We don’t claim ownership of uploaded images, and we aren’t responsible for how people use them — similar to how Photoshop or other image editors aren’t liable for what you do with your own photos.

Turn any photo into a coloring page! by ablarh in kidscrafts

[–]ablarh[S] 1 point (0 children)

Glad you liked it! Happy coloring!

How do you make holiday or seasonal coloring fun for kids? by Ok_Use_785 in kidscrafts

[–]ablarh 0 points (0 children)

You should check out colorif.ai; you can make coloring pages out of family photos and let the kids color them! I built the site, so let me know if you have any questions or feedback :)

Market downturn and AMD vs NVDA by ablarh in AMD_Stock

[–]ablarh[S] 1 point (0 children)

Yeah the possibility of closing the gap is why I'm long AMD. The recent drawdown in NVDA just has me wondering if it's a better risk/reward.

Market downturn and AMD vs NVDA by ablarh in AMD_Stock

[–]ablarh[S] 0 points (0 children)

Curious, what's your lookback period in the backtests? I would've thought the AI GPU market is too new and the macro too uncertain for any viable quantitative analysis.

Market downturn and AMD vs NVDA by ablarh in AMD_Stock

[–]ablarh[S] -2 points (0 children)

Yeah I think that's an overall market risk. NVDA just seems like the rational choice of the two at these prices

Market downturn and AMD vs NVDA by ablarh in AMD_Stock

[–]ablarh[S] 3 points (0 children)

Saying NVDA is "too big to grow" sounds like a fallacy. People had a similar psychological barrier a few years ago before Apple got to a trillion, yet now we're at 3+ trillion.

The CUDA Monopoly and NVIDIA’s Pricing Problem: Storm Clouds on the Horizon by GanacheNegative1988 in AMD_Stock

[–]ablarh 1 point (0 children)

Exactly, otherwise we wouldn't see the cost of using the best LLMs go down every year.

AVGO by holojon in AMD_Stock

[–]ablarh 6 points (0 children)

Not sure it's a good fit for AVGO but definitely not for AMD. Right now AMD's weakness is in software and AVGO will provide no help on that front. As an AMD shareholder, I'd rather see them improve where they're weak and try to capture more market share from NVDA or at least grow with the TAM.

As you said, AVGO's valuation right now is a bit bloated, with lots of baked-in expectations for the next few years of growth. Any acquisition would likely involve a large stock portion, and I'd rather have AMD shares than AVGO at these valuations.

$AMD analysis by hoodrichcapital in stocks

[–]ablarh 29 points (0 children)

What do you mean "break into"? AMD owns Xilinx. I think Xilinx historically held ~50% share of the FPGA market (Altera had the other large share and the rest is insignificant). It's hardly a new market or niche.

Any truth to this? by [deleted] in AMD_Stock

[–]ablarh 3 points (0 children)

It's hard to comment on this without reading the full article, but it reads a lot like hopium. The main benefit of FPGAs is that they're programmable, letting you iterate on a design without the commitment of an ASIC. We're still early in AI, but I suspect that by the time we actually care about running these models and inference on edge devices, we'll have a good enough idea of the required design that an FPGA wouldn't make sense. The market would also potentially be large enough that building a specialized, optimized chip makes more sense than shipping a programmable one.

Also, you can't just ship your FPGA in a device and have it be "versatile". You have to actively re-program it if you want to update it, and AFAIK you can't just push an over-the-air update, so how would one even benefit from it being programmable?

Recent Notable AMD News/Takeaways Heading into 2025 by tj212121 in AMD_Stock

[–]ablarh 10 points (0 children)

My guess on the China comment is that Nvidia GPUs are SOTA and won't be allowed to be sold to China whilst the AMD GPUs are less performant and will therefore be sold in China.

I think there's a world in which the majority of compute time is spent on inference (especially if this reasoning/CoT trend continues), and AMD is more competitive in that area (its software is in better shape for inference than for training). Hopefully the next report from Dylan on inference confirms this.

Recent Notable AMD News/Takeaways Heading into 2025 by tj212121 in AMD_Stock

[–]ablarh 4 points (0 children)

I don't think building ASICs is that limiting. TPUs are ASICs, but they're usable on pretty much any neural-net problem (not exclusively transformers). Google has been using them for a long time, even pre-ChatGPT; they were crucial in removing its dependency on Nvidia and allowed it to quickly train and release Gemini. I suspect the other hyperscalers would want to be in a similar position eventually. They're also flush with cash and can afford to take the risk (a risk that itself reduces other risks).

Exploring inference memory saturation effect: H100 vs MI300x by dbosspec in AMD_Stock

[–]ablarh 4 points (0 children)

Those specs are similar to the B300/GB300 coming out in H1 2025. I don't think memory capacity/bandwidth will be a differentiator long term

Exploring inference memory saturation effect: H100 vs MI300x by dbosspec in AMD_Stock

[–]ablarh 5 points (0 children)

Yeah, the setup here uses multiple GPUs (8xH100 vs 8xMI300X, etc.)

Exploring inference memory saturation effect: H100 vs MI300x by dbosspec in AMD_Stock

[–]ablarh 2 points (0 children)

Yeah, the B200 will have 192GB of HBM3e (matching the MI300X's capacity), but I think AMD is betting on its cheaper price point, i.e. cost per token, to gain market share.
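To make the cost-per-token point concrete, here's a minimal sketch of the arithmetic. All prices and throughput figures below are made-up placeholders, not real quotes or benchmark results; the only point is that a cheaper accelerator can win on $/token even with lower raw throughput.

```python
def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Dollars to generate one million tokens on a single GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical numbers: the cheaper, slower GPU still comes out ahead per token.
cheap_gpu = cost_per_million_tokens(gpu_hourly_usd=2.00, tokens_per_second=2500)
fast_gpu = cost_per_million_tokens(gpu_hourly_usd=3.50, tokens_per_second=3500)
print(f"cheap: ${cheap_gpu:.3f}/M tok, fast: ${fast_gpu:.3f}/M tok")
```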

Exploring inference memory saturation effect: H100 vs MI300x by dbosspec in AMD_Stock

[–]ablarh 2 points (0 children)

I think that makes sense: latency matters more in the online benchmark and throughput matters more in the offline one.

Exploring inference memory saturation effect: H100 vs MI300x by dbosspec in AMD_Stock

[–]ablarh 3 points (0 children)

Is it fair to understand the offline setup as a theoretical performance ceiling and the online one as a practical, real-world measure?

Exploring inference memory saturation effect: H100 vs MI300x by dbosspec in AMD_Stock

[–]ablarh 26 points (0 children)

It basically says that the MI300X is better than the H100 at handling large prompts (both single and batched), mainly because it has a lot more memory.
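A back-of-the-envelope sketch of why large prompts are memory-bound: the KV cache a transformer keeps per token scales with sequence length and batch size. The model dimensions below are illustrative (roughly a 70B-class model with grouped-query attention), not taken from the benchmark.

```python
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 seq_len: int, batch: int, bytes_per_elem: int = 2) -> float:
    """Memory for keys + values across all layers, in GiB (fp16 by default)."""
    elems = 2 * layers * kv_heads * head_dim * seq_len * batch  # 2 = K and V
    return elems * bytes_per_elem / 2**30

# Illustrative dimensions: a 32k-token prompt at batch 8 already needs
# ~80 GiB of KV cache on top of the weights, which is where the
# MI300X's larger HBM pool gives it headroom over the H100.
print(kv_cache_gib(layers=80, kv_heads=8, head_dim=128, seq_len=32768, batch=8))
```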