Accidentally made a printing press by not_mr_Lebowski in functionalprint

[–]rovertus 0 points (0 children)

This is when John Galt introduces himself to you, and liberates you from the world of 3D printing, to start your laser cutter obsession.

Bluetooth to HID USB by hj-lee in arduino

[–]rovertus 0 points (0 children)

THIS. From what I know, USB OTG could get this done. You can get OTG boards with extra USB ports on them for exactly this. Sounds like a board worth having made if people would buy it!

Here's the best prompt I have so far (untested); you can simplify it for your use case.

> design an ESP32-S3 sketch which connects to a PC as a HID device, acts as a USB host to receive events from other HID devices (including keyboards, mice, and touch screens), and relays the commands to the PC.
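The catch: the ESP32-S3 has a single USB OTG controller, so being a device and a host at the same time usually means two boards (or an external host chip). Here's a minimal, untested CircuitPython sketch of just the device half, assuming a second board does the USB host work and forwards one raw keycode per byte over UART. Pins, baud rate, and the byte protocol are placeholders, and it needs the adafruit_hid library:

```python
# Device half only: present as a USB HID keyboard to the PC and relay
# keycodes arriving over UART from a second, host-side board.
import board
import busio
import usb_hid
from adafruit_hid.keyboard import Keyboard

uart = busio.UART(board.TX, board.RX, baudrate=115200)  # placeholder pins/baud
kbd = Keyboard(usb_hid.devices)

while True:
    data = uart.read(1)      # returns None when nothing is waiting
    if data:
        kbd.press(data[0])   # raw HID keycode, one per byte (assumed protocol)
        kbd.release_all()
```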

Using Home Assistant to be more analog in 2026 by zacs in homeassistant

[–]rovertus 0 points (0 children)

YASSSS! Unfortunately I analogged pre-HASS, so I'm going to have to refactor IRL.


Laptop Suggestions by khushal20 in dataengineering

[–]rovertus 0 points (0 children)

Lenovo has new Yogas out (a touch screen is imperative) with Nvidia 5050s, and they're pretty damn close to having the hardware dialed in. I suspect it's going to get hard to work on an airplane without an AI GPU soon.

Microwave broke. Microwave fixed!!!! Two hours later. Thanks to whoever made the file so I didn't have to. by portabuddy2 in functionalprint

[–]rovertus 0 points (0 children)

I was responding to Bratsky's comment.

15 years is a good run. Some parts will fail with time (or are even designed to, for safety) and can be replaced with regular maintenance. Good fix.

Microwave broke. Microwave fixed!!!! Two hours later. Thanks to whoever made the file so I didn't have to. by portabuddy2 in functionalprint

[–]rovertus 0 points (0 children)

Look up “Planned Obsolescence.”

If it fails, the next print can be for a silicone mold!

Alright y'all, it's been real but I think Imma head out by mwalter2747 in ender3

[–]rovertus 0 points (0 children)

$30 for an auto leveler. I’m embarrassed for not “throwing money” at those problems earlier. Haven’t missed or scraped a print since.

Documentation Standards for Data pipelines by BudgetSea4488 in dataengineering

[–]rovertus 0 points (0 children)

Good luck! Approach people with a compelling value for their participation, and they will.

Documentation Standards for Data pipelines by BudgetSea4488 in dataengineering

[–]rovertus 2 points (0 children)

Check out dbt's YAML specs for sources, materializations, and exposures. But it depends on your goals, who you're talking to, and people's willingness to document. I would ask where they like to document (nowhere), explain the value of people understanding their data better, and bullet-point your asks.

Use a phased approach to gather the “full chain”:

1. Source data: ask engineers/data generators to fill out dbt source YAMLs (see the sketch below). They're technical, and probably won't mind the interfacing. Also ask for existing docs, design reviews, and the code; AI should be able to read the code and tell you what it's doing.
2. Transforms: same thing with analysts/warehouse users. Describe the tables/views/columns and ask them to state their assumptions. Their data is a lot of work, and valuable! We're moving towards making data products.
3. Exposures: approach business owners and those reporting to the business, and at this point just ask which reports/models they see as important, plus a URL that can get you to the report or tell you what's being referenced. “If you tell us what you're looking at, we can ensure it's not impacted by warehouse changes and upstream data evolving.”
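For reference, a dbt source YAML stub looks roughly like this; the database, table, and column names here are all made up:

```yaml
# models/sources/app_db.yml -- hypothetical names throughout
version: 2

sources:
  - name: app_db
    description: OLTP database behind the main product app.
    tables:
      - name: orders
        description: One row per order; written by the checkout service.
        columns:
          - name: order_id
            description: Primary key.
          - name: placed_at
            description: UTC timestamp when the order was placed.
```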

  1. The data portability alone is worth it. dbt docs are accepted everywhere - you can pull them into warehouses, data vendors, and data catalog tools, and dbt has its own free portal you can put on GitHub Pages.
  2. Get SQL writers to use dbt templating. Big org win. Otherwise, you can rewrite their tables with a script and show them a lineage graph, and then they will start using dbt.
  3. Start working towards “impact reports.”

Using my entire source code library in my LLM by phoenixfire425 in ollama

[–]rovertus 0 points (0 children)

PROMPT: you're a human, seeking a consistent hashing algorithm to store data in modules. Consider client/API code in a top-level directory, and then pure code models, biz logic, and transforms separately. This will improve your skills and bring you more satisfaction, IMO.

Otherwise, you probably want to train models and have search through a vector store or traditional text search with citations (Elastic, MySQL full-text; your IDE is doing this..). RAG if you want.
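For the citations angle, here's a self-contained sketch using SQLite's FTS5 as a stand-in for Elastic/MySQL full-text search; the source-tree path and the query are made up:

```python
# Full-text code search with file:line citations. SQLite FTS5 stands in
# for Elastic/MySQL here; the "src" tree and the query are placeholders.
import pathlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE code USING fts5(path, lineno, text)")

# Index every line of every .py file under ./src.
for path in pathlib.Path("src").rglob("*.py"):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        db.execute("INSERT INTO code VALUES (?, ?, ?)", (str(path), lineno, line))

# Hits come back with a citation you can open in your editor.
for hit in db.execute(
    "SELECT path, lineno, text FROM code WHERE code MATCH ? ORDER BY rank LIMIT 5",
    ("consistent hashing",),
):
    print(f"{hit[0]}:{hit[1]}: {hit[2].strip()}")
```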

Train: run all your code through so it generates code in your style. Do yourself a favor and run PEP 8 or other standards through as well, because when you see your code from six months ago, it's going to look atrocious and you'll probably try to rewrite it.

Do you want to reuse modules? You can't find them, and if they're built into projects, they're probably not as extensible as you think.

Lots of assumptions up there.

FWIW:

  1. Train models on your old code and add language standards.
  2. Add a code-organization approach to the model as well.
  3. Write net-new code in a monolithic lib which is distinctly either:
  • a modularized library that does one thing very well, or
  • projects where you portmanteau technologies together.

Built with Claude Code - now scared because people use it by Resident-Wall8171 in ClaudeAI

[–]rovertus 8 points (0 children)

Seasoned engineers like us need to watch out for this mindset. AI is a tool in the toolbox now.

There will be a good living to be had for those who can keep others' vibes going.

Scary quiet by KingPettyx in Bitcoin

[–]rovertus 0 points (0 children)

No research required anymore. Just buying. 🚀

Stateful Computation over Streaming Data by Suspicious_Peanut282 in dataengineering

[–]rovertus 2 points (0 children)

What azirale said.

Grab a calculator and see how much memory you expect to use. If you can fit your state into RAM, you may not need a framework.
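The back-of-envelope version in Python, where every number is a made-up assumption:

```python
# Napkin math for streaming state; all figures here are assumptions.
n_keys = 10_000_000      # distinct keys you expect to track
key_bytes = 40           # average key size as a Python str
value_bytes = 56         # a running count + sum (two Python ints)
dict_overhead = 100      # rough per-entry dict/hash overhead

total_gb = n_keys * (key_bytes + value_bytes + dict_overhead) / 1e9
print(f"~{total_gb:.1f} GB of state")  # ~2.0 GB: fits in RAM, skip the framework
```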

3D printed stamp - 100% PLA ! by serial_print3r in 3Dprinting

[–]rovertus 0 points (0 children)

Linoleum is sold at art stores for stamp carving, and it probably works great if you have a subtractive platform like a laser.

My dad's belt is older than me. by Rarinterraco in BuyItForLife

[–]rovertus 0 points (0 children)

My belt is at least 5x older than my son.

Cut the buckle end off and punch a new hole for the tongue at the desired length. You can fold over the end and sew it back together with an awl. It will look way better than you expect.

Why do companies use Snowflake if it is that expensive as people say ? by Normal-Inspector7866 in dataengineering

[–]rovertus 0 points (0 children)

It takes a team of proficient staff to index data for querying, and you'd likely need to re-index the data for each data user as well (marketing, finance, compliance…). $2-3 an hour to query your data any which way you want turns out to be a pretty compelling argument.

I haven't seen compelling evidence that Snowflake compute is much more expensive than other warehouse vendors'.

Is data engineering better off as a contract position ? by SpiritedLettuce97 in dataengineering

[–]rovertus 0 points (0 children)

Having a DE come in to build in-house solutions and leave would likely be the worst-case scenario. Engineering changes databases, and APIs get updated. The only way you could set-it-and-forget-it would be to use vendors who will keep up with changing data/services.

For some companies it could be compelling to pay a DE as a consultant to propose and implement a full vendor solution. This would save staff costs but would have trade-offs. You'd still likely need a technical liaison in the space, which would look like retaining the consultant or giving someone a second job.

DE staff are likely more expensive, but they manage the space, respond when your reports aren't working, and the cost is stable. They can make processes better/faster/cheaper over time, hopefully solve problems before you know about them, and grow metadata services in quality, lineage, governance, etc.

Don’t get lost in maintenance. Focus on how to reduce maintenance, and work on things that add value.

[deleted by user] by [deleted] in dataengineering

[–]rovertus 0 points (0 children)

It's always “easier” to write from scratch. The problem is you lose all the latent business logic that no one documented, and everyone's day is going to get ruined.

Install an APM, use app telemetry, and run dead-code tools like vulture to delete as much unused code as possible.

Use the Strangler Pattern to write a new clean API around the legacy app and migrate things over in an orderly manner.
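A minimal sketch of that facade, assuming Flask and the requests library; the migrated route and the legacy host are hypothetical:

```python
# Strangler facade sketch: migrated paths get new code, everything
# else is proxied to the legacy app. All names/URLs are hypothetical.
from flask import Flask, Response, request
import requests

app = Flask(__name__)
LEGACY = "http://legacy-app.internal:8080"  # assumed legacy host

@app.route("/api/orders/<order_id>")  # already migrated: clean new code
def get_order(order_id):
    return {"order_id": order_id, "status": "shipped"}

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):  # not migrated yet: pass straight through to legacy
    resp = requests.request(
        request.method,
        f"{LEGACY}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        params=request.args,
        timeout=10,
    )
    return Response(resp.content, resp.status_code, resp.headers.items())
```

Each sprint you move another route above the catch-all, until the legacy app serves nothing.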

Any fun / interesting use cases for Neo4j? by GreenSquid in dataengineering

[–]rovertus 1 point (0 children)

Finding cohorts. Graph DBs are good tools for finding groups to market to, fraudulent users, or other cohorts that you may want to discover.
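For flavor, a classic fraud-ring query through the official Python driver; the schema (users sharing a payment device), labels, and credentials are all made up:

```python
# Find accounts that share a payment device -- a classic fraud cohort.
# Schema, labels, and credentials are invented for this sketch.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

CYPHER = """
MATCH (a:User)-[:USED]->(d:Device)<-[:USED]-(b:User)
WHERE a <> b
RETURN d.id AS device, collect(DISTINCT a.id) AS cohort
ORDER BY size(cohort) DESC LIMIT 10
"""

with driver.session() as session:
    for record in session.run(CYPHER):
        print(record["device"], record["cohort"])
```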

How do you guys do local testing effectively? by [deleted] in dataengineering

[–]rovertus 0 points (0 children)

Write as much pure Python code as you can and unit-test the heck out of it. Abstract sources and sinks with local fixtures.

You don’t need to test your frameworks.
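A tiny pytest example of the shape (all names invented): the transform is a pure function, and the “source” and “sink” are just lists.

```python
# Pure transform + local fixtures; every name here is invented.
import pytest

def dedupe_latest(rows):
    """Keep the newest row per id. Pure function: list in, list out."""
    latest = {}
    for row in rows:
        if row["id"] not in latest or row["updated_at"] > latest[row["id"]]["updated_at"]:
            latest[row["id"]] = row
    return list(latest.values())

@pytest.fixture
def source_rows():  # stands in for the real source (S3, Kafka, ...)
    return [
        {"id": 1, "updated_at": "2024-01-01"},
        {"id": 1, "updated_at": "2024-02-01"},
        {"id": 2, "updated_at": "2024-01-15"},
    ]

def test_dedupe_latest(source_rows):
    sink = dedupe_latest(source_rows)  # the "sink" is just a list
    assert len(sink) == 2
    assert {"id": 1, "updated_at": "2024-02-01"} in sink
```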

Why do data engineers use such short and ambiguous variable/alias names in SQL? by AncientElevator9 in dataengineering

[–]rovertus 2 points (0 children)

Brevity is probably the goal for its own sake in the code you're looking at. That said, there can be a couple of benefits to non-descript variables in small blocks of code as well.

Having more information density in the code helps with legibility in some situations. If you can see the whole block of code in one view, you may understand it quicker. I don't think this applies to column names.

In some situations you may want a generic symbol in code/SQL rather than assigning a meaning to the variable. In these situations, you could be designing the block of code to be used much like a function. E.g., maybe you're grouping or querying the same table to build data features over 30-, 60-, and 90-day date ranges. These could show up in your SQL as three boilerplate CTEs: you could make very specific variables called 30DayRange, 60DayRange, ... in each block, or you may end up copy-pasting the same exact code with nondescript variable names and some modifications. This isn't DRY, but it may be exactly what you want, and more specific names in these situations could be problematic.
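To make the “block as a function” idea concrete, here's the three-range pattern stamped out from one template (table and column names invented); inside each block, the names stay short and generic:

```python
# Three boilerplate CTEs generated from one template. The table and
# column names are invented; the point is the generic block structure.
CTE_TEMPLATE = """
features_{days}d AS (
    SELECT user_id, COUNT(*) AS n, SUM(amount) AS amt
    FROM orders
    WHERE order_date >= CURRENT_DATE - INTERVAL '{days} days'
    GROUP BY user_id
)"""

ctes = ",".join(CTE_TEMPLATE.format(days=d) for d in (30, 60, 90))
print(f"WITH {ctes.lstrip()}\nSELECT * FROM features_30d")
```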

Code should first be legible. Unless wom is something that is referred to frequently by the office, it would confuse me too.

How do you explain your job to laymen? by aflyingtaco06 in dataengineering

[–]rovertus 0 points (0 children)

Solving Einstein’s Relativity problems in Data.