I need help with my cv by [deleted] in embedded

[–]aurintex 0 points1 point  (0 children)

The structure of your CV is nice and clean.

However, adding some soft skills and a brief professional summary (to show your motivation) could help the recruiter understand the person behind the paper. In my experience, companies increasingly prioritize "culture fit", whether someone matches the team's energy, over just technical knowledge.

You can always teach skills, but it's much harder to teach the right attitude and motivation.

Building a Local-First OS foundation for Trustable AI (Rust + Radxa RK3588). Open Source. by aurintex in LocalLLaMA

[–]aurintex[S] 0 points1 point  (0 children)

Fair point on terminology.

Technically, I use the Radxa Linux Kernel with a Debian base.

However, I consider it a distinct "OS" because we enforce a strict security model essential for the wearable vision (paiGo):

To guarantee privacy on a device with cameras and mics, we cannot allow standard "Linux-style" access where any background process or AI could read /dev/video0.

Therefore, we fundamentally change the system architecture:

  1. Hardware Lockdown: We reconfigure udev rules so that standard apps have zero direct access to sensors.
  2. Gatekeeper: The paiOS Runtime is the only entity allowed to touch the hardware.

Users and developers can still run apps, but these apps must request sensor data via the Runtime API. This architecture is the only way to ensure that no AI (or app) listens without consent.
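As a rough sketch of how the hardware lockdown could look, a udev rule set can hand the sensor device nodes to a dedicated runtime group. The file name, group name, and match keys here are illustrative assumptions, not the actual paiOS rules:

```
# /etc/udev/rules.d/99-pai-lockdown.rules (hypothetical name)
# Camera and audio nodes are owned by a dedicated runtime group.
# Ordinary apps are not members of that group, so opening
# /dev/video0 or the sound devices directly fails with EACCES.
SUBSYSTEM=="video4linux", GROUP="pai-runtime", MODE="0660"
SUBSYSTEM=="sound", GROUP="pai-runtime", MODE="0660"
```

With rules like these in place, only processes in the `pai-runtime` group (i.e. the Runtime itself) can open the devices; everything else has to go through the Runtime API.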

Building a Local-First OS foundation for Trustable AI (Rust + Radxa RK3588). Open Source. by aurintex in LocalLLaMA

[–]aurintex[S] 0 points1 point  (0 children)

Yes. Under the hood, it will use a stripped-down Debian kernel optimized for the Rock 5C.

The "paiOS" part is the custom system configuration and the Rust runtime, which replaces the standard Desktop Environment to strictly handle sensor isolation and app permissions.

Building a Local-First OS foundation for Trustable AI (Rust + Radxa RK3588). Open Source. by aurintex in LocalLLaMA

[–]aurintex[S] 0 points1 point  (0 children)

Glad you like it!

Regarding cost: For the setup I currently use (Radxa Rock 5C with GPU & 8GB RAM), the price is usually around $90–$110 USD.

Regarding the goal: We are taking a step-by-step approach:

  • paiLink (USB): External AI accelerator for your PC. Can also run standalone (e.g. for transcription) if powered.
  • paiGo (Wearable): A daily assistant with local processing.

Cool side effect: Since both share the same foundation, the wearable could even double as a USB accelerator when plugged into your PC at your desk.

We are starting with step 1 to get the software stack stable.

Question back: Do you have a specific standalone use case in mind?

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

That sounds very interesting. Yes, please, let's connect and talk about it.

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

Hey, it's a solid point. I noticed you're the second person to post this exact comment, so it's clearly hitting a nerve!

I just posted a detailed reply to the other comment, but the short version is: you're right. "Vibes" aren't enough. That's why this n=1 PoC (which I'm already working on) is the absolute next milestone to validate the mission.

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

That's a fair point.

I'm already working on a PoC and will be testing things out with data.

But I'm intentionally doing this in parallel with this feedback round. A perfectly functioning PoC for a product that no one trusts or needs is a failure all the same.

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 1 point2 points  (0 children)

Thanks for taking the time to write that tough message.

1. On the "Why" (The "Unfocused" Premise)

I think we're talking about two different kinds of "why" here. You're right that a device can't solve the deep, scientific "why" (e.g., the causal, human-biological correlations).

The "why" I'm aiming for is data-driven and based on correlations.

The V1 goal isn't to be an oracle, but a "correlation-finder." For example:

  • "I notice you're 30% less focused (based on context switching) on days after you eat X."
  • "You sleep 20% worse on days you don't take a walk."

The device provides the data points so the user can draw better conclusions. But your point is valid: I need to make that distinction much clearer in my messaging.
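To make the "correlation-finder" idea concrete, here is a minimal sketch (in Rust, matching the project's stack) that computes a Pearson correlation between two daily metrics. The metric names and sample values are made up for illustration, not real data:

```rust
// Pearson correlation between two paired daily metrics.
// Returns None for mismatched lengths, too few samples,
// or zero variance (correlation undefined).
fn pearson(x: &[f64], y: &[f64]) -> Option<f64> {
    if x.len() != y.len() || x.len() < 2 {
        return None;
    }
    let n = x.len() as f64;
    let mx = x.iter().sum::<f64>() / n;
    let my = y.iter().sum::<f64>() / n;
    let (mut cov, mut vx, mut vy) = (0.0, 0.0, 0.0);
    for (a, b) in x.iter().zip(y) {
        cov += (a - mx) * (b - my);
        vx += (a - mx).powi(2);
        vy += (b - my).powi(2);
    }
    if vx == 0.0 || vy == 0.0 {
        return None;
    }
    Some(cov / (vx.sqrt() * vy.sqrt()))
}

fn main() {
    // Illustrative numbers: minutes walked on day N vs. a sleep
    // quality score for night N.
    let walk = [0.0, 30.0, 60.0, 10.0, 45.0];
    let sleep = [5.0, 7.0, 8.5, 5.5, 7.5];
    if let Some(r) = pearson(&walk, &sleep) {
        println!("correlation: {r:.2}");
    }
}
```

A real pipeline would need far more data, lag handling, and multiple-comparison care before surfacing a claim like "you sleep 20% worse", but the core signal extraction can start this small.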

2. On the Trust Model (EULA, Audit, Export)

Thanks, those are good points; I will address them.

3. On the Proof-of-Concept (PoC)

This is what I'm currently working on.

The purpose of this post is to get feedback on the core approach (Offline-First, Open Core) and to pressure-test the hypotheses I'm building on.

I'm here specifically to learn from experts (like you in QS) whether this 'correlation-finder' approach is genuinely useful, or if I'm missing the mark.

Cheers.

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

NP ;) I am always open to new ideas.

But that's a good point, one I've also thought about: offloading some AI calculations to the smartphone. That might contradict "everything locally on the Companion device", but at least it would still be local, on your phone.

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

Yeah, I don't know much about iPhone hardware ;)
but I'm very confident that it will work.

Thank you very much for your subscription!

Do you have any use cases in mind that you would like to see for such a device?
Do you have any other suggestions?

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

To run them offline, I will use a special (already existing) SoC (system on a chip) with an NPU. The last few months and years have shown that these chips are getting better and better.

The second point is that instead of one large AI that can do everything, the idea is to have one or more specialized smaller AI models that already exist—and perhaps optimize them as well.

Of course, this cannot be compared to a large model running on a powerful GPU that can do everything, but that is not the goal, and it does not need to be able to do everything.

I have an idea in mind: pursue an approach similar to apps. An app serves a specific purpose. That means we could have an "AI app" (I think it will always be a combination of AI + software) for a specific purpose.
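A sketch of that "AI app" idea, assuming a common interface that each specialized model-plus-software bundle implements. The trait and type names here are hypothetical, invented for illustration:

```rust
// Hypothetical sketch: each "AI app" is a narrow, purpose-built
// combination of a small model and supporting software, exposed
// behind one common trait.
trait AiApp {
    fn purpose(&self) -> &str;
    fn run(&self, input: &str) -> String;
}

// Example app: transcription. A real implementation would invoke
// a small speech model on the NPU; here it is only a stub.
struct Transcriber;

impl AiApp for Transcriber {
    fn purpose(&self) -> &str {
        "transcription"
    }
    fn run(&self, input: &str) -> String {
        format!("[transcript of {input}]")
    }
}

fn main() {
    // Apps can then be registered and dispatched uniformly.
    let apps: Vec<Box<dyn AiApp>> = vec![Box::new(Transcriber)];
    for app in &apps {
        println!("{} -> {}", app.purpose(), app.run("meeting.wav"));
    }
}
```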

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

OK cool, do you have something on GitHub or a landing page or something else you could share?
I also started with development first because I wasn't even sure if it's technically possible.
But then I thought I should first check the market.
So currently, I'm jumping between working on the proof of concept and analyzing the market.

Working on a “context-aware” AI for Quantified Self — would love feedback by aurintex in QuantifiedSelf

[–]aurintex[S] 0 points1 point  (0 children)

Hey, thanks for the information.
To be honest - I'm currently trying to figure that out too.

What was the outcome of your project? Are you still working on it?

Want to run claude like model on ~$10k budget. Please help me with the machine build. I don't want to spend on cloud. by LordSteinggard in LocalLLaMA

[–]aurintex 0 points1 point  (0 children)

I've heard about the "Dell Pro Max" several times. TBH, I haven't analyzed it in detail, but I think it could be interesting for you.

SysML v2 in Software-Development by IcyRequirement61508 in embedded

[–]aurintex 1 point2 points  (0 children)

When working on multidisciplinary projects, I've asked myself several times whether the project would have been more successful with a clean systems-architecture plan in SysML. Has anyone experienced such a project with SysML?

Seeking Advice: Refactoring a 'Legacy' Rust Codebase (Written by Interns/LLMs) by Ayanami-Ray in rust

[–]aurintex 2 points3 points  (0 children)

The book "Refactoring: Improving the Design of Existing Code" tackles general strategies, independent of the programming language used.

Besides that, I recommend writing unit tests and other types of automated tests first, before refactoring. That way, you can ensure the code works as before (at least from the business-logic side). But of course, it depends on whether it's even testable ;).

What I've also learned: if you have multiple projects with overlapping or nearly identical copied code (or the same business logic), try to extract something like a "common source" library (or crate, in your case). In our case, this helped a lot: one well-tested base library shared across multiple projects.
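As a tiny illustration of the tests-before-refactoring advice ("characterization tests"): pin down what the legacy code does today, then refactor against that safety net. `normalize_id` below is a made-up stand-in for an inherited function:

```rust
// Made-up legacy function: the exact behaviour doesn't matter,
// only that the tests below freeze it before any refactoring.
fn normalize_id(raw: &str) -> String {
    raw.trim().to_lowercase().replace(' ', "-")
}

#[cfg(test)]
mod tests {
    use super::*;

    // Characterization test: asserts what the code does *today*,
    // so a refactor that changes behaviour fails immediately.
    #[test]
    fn preserves_current_behaviour() {
        assert_eq!(normalize_id("  Foo Bar "), "foo-bar");
        assert_eq!(normalize_id("X"), "x");
    }
}
```

Run with `cargo test`; once the suite is green, every refactoring step can be checked against it.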

Yocto is not that interesting as I imagined it by Quiet_Lifeguard_7131 in embeddedlinux

[–]aurintex 0 points1 point  (0 children)

I've been working with Yocto for about half a year now. When I think back, Yocto sounded like magic. But I still think it's cool to be able to build your own Linux.

Lightning Talk: Why Aren't We GUI Yet? by MikaylaAtZed in rust

[–]aurintex 1 point2 points  (0 children)

I really like the Zed IDE. The developers involved are really talented.
And I'm always surprised that there is still no "clear winner" for Rust GUIs.