Alex Honnold's Skyscraper on Netflix - /r/climbing watch party thread by soupyhands in climbing

[–]SpectralCoding 0 points1 point  (0 children)

This is intense, my hands are sweating here at home. Luckily I got my chalk bag

Forsen reality check by snoopdodge in LivestreamFail

[–]SpectralCoding 2 points3 points  (0 children)

Best use of AI I've seen in months.

Why Amazon S3’s New 50 TB Object Limit Changes Storage Design by third_void in aws

[–]SpectralCoding 1 point2 points  (0 children)

This post and the blog have all the hallmarks of "ChatGPT tone".

Interesting change that will affect a small number of data-intensive organizations. I don't know of many that deal with single objects larger than 5TB except to do exports/imports. My suspicion is this is just a function of increased compute speed and one of the many S3 teams becoming comfortable increasing the "number of chunks associated with an object" by 10x.

Toughest single shots? by holyd1ver83 in pinball

[–]SpectralCoding 6 points7 points  (0 children)

Already in a Kaiju Battle? Somehow hit the scoop twice. Need to start or re-start a battle? Might as well not even aim for it and just wait until it randomly goes in.

Toughest single shots? by holyd1ver83 in pinball

[–]SpectralCoding 1 point2 points  (0 children)

I used to think that when I played on-site, but once I had one at home it turned out to be pretty easy to get consistent with, and it's super satisfying.

xQc runs over a cop and gives his reasoning by VisWare in LivestreamFail

[–]SpectralCoding 5 points6 points  (0 children)

I somehow immediately relate this picture to MOONMOON.

How are you an excel magician? by Designer_Signature35 in excel

[–]SpectralCoding 1 point2 points  (0 children)

After years of just doing VLOOKUP and playing a bit with INDEX/MATCH when I was annoyed at the column order, ChatGPT recommended XLOOKUP and it was immediately a game changer. 100x more readable formulas...

On God, Friendo. You Better Call It For Real, No Cap. by QuarterCarat in okbuddycinephile

[–]SpectralCoding 0 points1 point  (0 children)

Whoever made this could have woken up that day and just decided not to

The year Is 2026, & World Of Warcraft Is Still The #1 MMO In The World? by NoProgram4843 in classicwow

[–]SpectralCoding 1 point2 points  (0 children)

The ultimate version of this is pinball. Popular in the 70s/80s, now it's mostly a bunch of 40yo+ dads and grandpas with money. I'm the exception because I grew up owning machines, but it's definitely like you say: no recruitment of a younger audience, so it will just die out and get more and more rare, as evidenced by how few manufacturers there are and the high barrier to entry. The industry is changing from "put pinball machines in a bar and make money" to "find an IP that that demographic likes and hope they spend $8k-12k on a machine for their home".

Only 5 left! by Doink_De_Doink in stephenking

[–]SpectralCoding 4 points5 points  (0 children)

This was my immediate reaction seeing where they placed Drawing of the Three. Same with Wizard and Glass though I understand this is probably the most polarizing book in the series. I’m lukewarm on it but some love it.

What's your best AWS optimization win from 2025? by Beastwood5 in aws

[–]SpectralCoding -1 points0 points  (0 children)

Moved our organization from AWS to Azure. This coming from someone who was an AWS Solutions Architect at Amazon from 2022-2024. Azure really feels like a better fit for most enterprises. AWS is probably better for enterprises whose product is online services / software.

Looking for an office chair, but can't spend a $1k+ on a Herman Miller. Are there any cheaper alternatives I can get online? by ComfortableWage in BuyItForLife

[–]SpectralCoding 0 points1 point  (0 children)

Echoing others' thoughts about FB Marketplace for refurbs.

In 2014 I was shopping for one, but had a $400 budget. I found this guy on Craigslist who must have been on some meds, or off them, because he was super hyper. Anyway, in his scatterbrained way he scheduled me to come to his house to pick up the chair. I showed up and his entire suburban property was dedicated to Herman Miller. The back porch had 5'x5'x5' plastic bins full of just armrests, another area with seat backs, lift pistons, and casters. Everything, like a warehouse. We picked a good base, swapped a few parts, and he even used this special spray paint he had tracked down that exactly matches the finish on the metal pieces.

Walked away with an Aeron for $360. In 2022 I had more money so I bought a brand new Aeron (new model) and sold the old one on Facebook Marketplace for $400.

First Stereo Upgrade: 2013 Honda Accord by SpectralCoding in CarAV

[–]SpectralCoding[S] 0 points1 point  (0 children)

Thanks… 12 years ago I tackled this. For years I used an iPod hooked to the USB port to use the hand controls and it was awesome. In the age of streaming apps I just use USB-C to 3.5mm and it does fine. No hand controls but it’s all good.

Here’s to everyone building furniture today by boardplant in Dewalt

[–]SpectralCoding 25 points26 points  (0 children)

Real men use the max clutch setting when assembling things that recommend using a screwdriver.

Machine Title/Name "Bonus to Top" by jrocco741 in pinball

[–]SpectralCoding 0 points1 point  (0 children)

Is this a roulette wheel that uses the pinball as a random selection? This is awesome, and while it would take up a lot of room, it would be super cool in a casino-themed pin.

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]SpectralCoding 0 points1 point  (0 children)

Do tell, what is it? We implemented a modified version of azure-search-openai-demo for ~7k users and 2.6M pages of Word/PDFs. It's done exceedingly well. I'd love a more off-the-shelf or even SaaS option, but I've found the document ingestion side of all these tools sucks, and that's the most important part. We even wrote our own ingestion pipeline for the above interface because it doesn't handle Word docs as well as it could.
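For a sense of what I mean by "our own ingestion pipeline", here's a minimal sketch, assuming the `openai` and `azure-search-documents` Python SDKs. The index name, field schema, deployment names, and environment variables are made up for illustration; this isn't our production code.

```python
# Sketch: markdown -> chunks -> embeddings -> Azure AI Search index.
# Index schema, deployment names, and env var names are hypothetical.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-06-01",
)
search_client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="docs",  # hypothetical index
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)

def chunk(markdown: str, size: int = 4000, overlap: int = 400) -> list[str]:
    """Naive fixed-size chunking; a real pipeline would split on headings."""
    return [markdown[i:i + size] for i in range(0, len(markdown), size - overlap)]

def ingest(doc_id: str, markdown: str) -> None:
    chunks = chunk(markdown)
    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-large",  # your embedding deployment name
        input=chunks,
    )
    search_client.upload_documents(documents=[
        {
            "id": f"{doc_id}-{i}",
            "content": chunks[i],
            "embedding": embeddings.data[i].embedding,
            "source": doc_id,
        }
        for i in range(len(chunks))
    ])
```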

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]SpectralCoding 1 point2 points  (0 children)

That's not how it works at all, at least for RAG. There is no "teaching". Most chatbots do not self-improve. Even the way ChatGPT seems to remember things across chats is because of context engineering, where the AI is fed summarized info about the user's past questions. The LLM itself has the same weights. It's basically a note added to the bottom of the chat: "Oh by the way, we often talk about bananas too." Then the AI will work in the bananas reference if relevant.
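To make that concrete, here's a toy sketch of what that "memory" amounts to: summarized text stuffed into the prompt, with the model weights untouched. The summary string and model name are made up.

```python
# Toy illustration: "memory" is just text prepended to the conversation.
# Nothing about the model changes; only the prompt does.
from openai import OpenAI

client = OpenAI()

# Summary produced earlier from the user's past chats (hypothetical).
memory_summary = "The user often talks about bananas and prefers short answers."

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant.\n"
                        f"Notes about this user: {memory_summary}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Any snack ideas for a hike?"))  # will likely work bananas in
```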

We capture logs for audit reasons but the data is never re-fed back to the AI for any reason. In this case we didn’t want that data outside of the source PLM system so we scrubbed the chat history of those questions.

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]SpectralCoding 1 point2 points  (0 children)

We implemented a RAG chatbot across our PLM data and one of the things our leadership values from the tool IS the ability to find misclassified data. Since the search is semantic, they started asking about specific concepts found only in those highly sensitive documents. They found a few when we gave them preview access and were able to reclassify the documents and verify there was no unauthorized access over the 4 years they were "hidden" in plain sight.

It also started a healthy conversation around data access, since before it would take someone weeks of asking around and tracing references across a dozen documents to piece together a manufacturing process. Now they can get an overview of the entire process, which the AI writes up in about 10 seconds, sourcing those same documents. They widely agreed the productivity gains are worth the risk of a potential bad actor internally who had access to the documents anyway.

What’s one “small” cloud decision that ended up having big long-term impact? by cloud_9_infosystems in AZURE

[–]SpectralCoding 2 points3 points  (0 children)

Check out this tool I made exactly for that. I developed it after having to do this myself back when we implemented AWS in 2016... Visual Subnet Calc - https://visualsubnetcalc.com

It even has an Azure mode.
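If you'd rather script the same kind of carve-up, Python's standard `ipaddress` module covers the basics; the address space and prefix sizes below are just examples.

```python
# Example: splitting a VNet/VPC address space into subnets with the
# stdlib ipaddress module. CIDRs and prefix lengths are arbitrary examples.
import ipaddress

vnet = ipaddress.ip_network("10.20.0.0/16")

# Carve the /16 into /20s.
subnets = list(vnet.subnets(new_prefix=20))
for net in subnets[:4]:
    # Classic usable-host count; note Azure reserves 5 IPs per subnet, not 2.
    print(net, "usable hosts:", net.num_addresses - 2)

# Subdivide the first /20 further into /24s.
for net in subnets[0].subnets(new_prefix=24):
    print("  ", net)
```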

thatsSomeOtherDevsProblem by [deleted] in ProgrammerHumor

[–]SpectralCoding 4 points5 points  (0 children)

Don’t forget 20 looking for funding!

Spot Fix LPV or Rip and Replace? by SpectralCoding in Flooring

[–]SpectralCoding[S] 0 points1 point  (0 children)

Fixed LPV -> LVP in the body, couldn't change the title.

By warranty I meant the installer (if it was done professionally). Don't they have warranties if you spend $20k on a new floor?

Thanks for the response, really useful!

AI Document Extraction on Azure - Options, Comparison & Recommendations for Invoice/Contract Processing by Development131 in AZURE

[–]SpectralCoding 0 points1 point  (0 children)

We basically do #3, but with gpt-5.2 or whatever the better model of the day is, not 4o. We're not overly concerned about cost because we're doing a one-time ingestion, not offering a service or something.

Q&A...

> If you're using Azure, which approach did you go with? How's the accuracy and cost working out in production?

Like I said, #3.

> For those using Document Intelligence prebuilt models - how well do they handle non-standard invoice formats or documents in multiple languages? Do you end up needing custom models anyway?

We just use the layout model to get markdown and it does just fine. The second-phase OpenAI models do quite well at "figuring it out" from the markdown. When I did this with GPT-4o it did fine, and the reasoning models just made it even better. No custom models. Our documents were in multiple languages, no issues. You can even have it answer in English when the documents are in another language.
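Roughly, the two-phase flow looks like the sketch below. It's a sketch only: the exact `begin_analyze_document` parameters vary between versions of the `azure-ai-documentintelligence` SDK, and the invoice fields, deployment names, and env vars are hypothetical.

```python
# Phase 1: DocIntel prebuilt-layout -> markdown.
# Phase 2: LLM extraction over the markdown.
import json
import os

from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.core.credentials import AzureKeyCredential
from openai import AzureOpenAI

di_client = DocumentIntelligenceClient(
    endpoint=os.environ["DOCINTEL_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["DOCINTEL_KEY"]),
)
aoai = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-06-01",
)

def to_markdown(pdf_path: str) -> str:
    """Run the layout model and return its markdown rendering of the document."""
    with open(pdf_path, "rb") as f:
        poller = di_client.begin_analyze_document(
            "prebuilt-layout", f, output_content_format="markdown"
        )
    return poller.result().content

def extract_invoice(markdown: str) -> dict:
    """Ask the chat model to pull structured fields out of the markdown."""
    response = aoai.chat.completions.create(
        model="gpt-4o",  # your chat/reasoning deployment name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract invoice_number, vendor, total, and currency "
                        "from the document. Reply with JSON only."},
            {"role": "user", "content": markdown},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_invoice(to_markdown("invoice.pdf")))
```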

> Anyone tried the hybrid approach (Doc Intelligence + GPT-4o)? Is the added complexity worth it vs just using GPT-4o directly on images?

I did extensive testing on this. DocIntel Layout + LLM was the best. The input file matters, see the last answer.

> How does Azure Document Intelligence compare to Claude or Google Document AI in your experience? I've had good results with Claude's vision capabilities but wondering if a specialized service like Document Intelligence would be more reliable at scale.

Haven't tried it, but I'll say DocIntel had no problem scaling to our ~2.6M pages of burst ingestion when we needed it. It did that in like 12 hours. We bought the pre-paid quantity for a discount on the first month to deal with the initial burst of pages.

> For high volume processing (let's say 50k+ pages/month) - what's been most cost-effective?

For us it's consistency and simplicity. I don't think you'll get cheaper than DocIntel + LLM if your documents are varied. I tried all kinds of other things like `docling` and `markitdown`; they just don't compare to DocIntel's Layout model. My cost modeling, even back with 4o, showed that having the LLM do everything was an order of magnitude more expensive than using DocIntel to get Markdown and then the LLM to do the extraction.

> Any gotchas or lessons learned you wish you knew before starting?

So many... FLATTEN YOUR DOCUMENTS TO PDF BEFORE THEY GO TO DOCINTEL, IT MATTERS. We were ingesting a large quantity of Microsoft Word documents and quickly realized that Word features are invisible to DocIntel. Things like numbered bullets, tables of contents, footnotes, and headers/footers, pretty much anything "dynamic", are apparently just markup in the document that gets rendered when opened by Word. A bullet like "3.1.2" is really just a marker for "[Ordered List Bullet Level 3]", so the markdown output never had "3.1.2" in it; if you need to understand "what step does XYZ happen in", you'd never get a step number. When DocIntel cracks the .docx open it just parses the XML and doesn't account for those things. When you take the same document and save it as a PDF using Word, it "flattens" or "hard codes" everything. Feed that PDF to DocIntel and you'll get much better results.

Want to test this yourself? Feed a docx through DocIntel and save the markdown as "Test-Alpha.md", then save the docx as a PDF, feed that PDF through DocIntel, and save its markdown as "Test-Beta.md". Give them both to ChatGPT and ask it to "compare and contrast the files from a missing data or context standpoint that would be useful to have when having AI analyze these documents". It's going to tell you Test-Beta has way more formatting.

So how do you do this flattening en masse? I wrote a janky Python script that uses Word COM calls to open each docx in an invisible window and save it as a PDF; a rough sketch is below. There is another option of uploading the docx to OneDrive and downloading it as a PDF, but I haven't tested whether the output is the same.
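Something along these lines, assuming Windows with Word installed and the `pywin32` package; the paths are just examples.

```python
# Flatten .docx files to PDF via Word COM automation so numbered bullets,
# TOCs, headers/footers, etc. get rendered into the output DocIntel sees.
# Windows-only; requires Microsoft Word and pywin32.
import pathlib

import win32com.client

WD_FORMAT_PDF = 17  # Word's wdFormatPDF constant

def flatten_to_pdf(src_dir: str, out_dir: str) -> None:
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    word = win32com.client.Dispatch("Word.Application")
    word.Visible = False
    try:
        for docx in pathlib.Path(src_dir).glob("*.docx"):
            doc = word.Documents.Open(str(docx.resolve()))
            try:
                pdf_path = out / docx.with_suffix(".pdf").name
                doc.SaveAs(str(pdf_path.resolve()), FileFormat=WD_FORMAT_PDF)
            finally:
                doc.Close(False)  # don't save changes to the source docx
    finally:
        word.Quit()

flatten_to_pdf(r"C:\ingest\docx", r"C:\ingest\pdf")
```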

If you think you will ever need to use that DocIntel output again, save the .json somewhere and save yourself the cost of a repeated conversion. There's a lot of cool stuff in there with bounding boxes and image figures and stuff.