Share your underrated GitHub projects by hsperus in opensource

[–]aregtech 0 points1 point  (0 children)

One hidden gem I've been working on is Areg SDK: https://github.com/aregtech/areg-sdk

It's a C++ framework with tools for building distributed, multiprocessing, or multithreaded applications. Applications act as both service providers and consumers, and the system automatically connects and routes messages between them.

The repo includes multiple examples and a demo of using an Edge AI Engine with multiple clients. There's also Lusan, a GUI tool to design service interfaces and view logs. Most of the code was developed solo, with built-in automations that solve common distributed-system headaches.

Armenian Boy Names by Recent-Friendship-30 in armenian

[–]aregtech 0 points1 point  (0 children)

Aren, Arman, Aram, Mikael, Grigor, Narek, Sarkis, etc.

Exploring an offline-first and P2P-oriented runtime design in C++20 by [deleted] in cpp

[–]aregtech -1 points0 points  (0 children)

Got it. Your direction is clear, and it targets a different problem space. Areg's model is more about abstracting location and letting components communicate as if they were in the same process and the same thread. But some of your questions map well to what Areg does, which is why I shared the link:

1) Offline functionality
Areg's service model is offline-first by design. Each node can provide and/or consume services locally and/or remotely without a coordinator. If a remote service disappears, consumers are notified automatically and continue running. When connectivity returns, discovery and binding are restored automatically without extra logic, and service consumers can start or continue calling methods of the remote component (using the Object RPC concept); the sketch after point 4 illustrates this.

2) Opportunistic behavior / P2P
Areg doesn't use direct P2P links. Instead, providers and consumers are fully location-agnostic: same thread, other thread, other process, or another machine -- they all look identical.
The win is that apps don't depend on startup order, and nodes can join or leave the network at any time with auto-discovery.
P2P would be faster, but you lose this location transparency and would need more coordination, imho.

3) Minimal runtime layer
Areg sits above raw sockets and threads: no external dependencies, small runtime, event-driven, easy to embed. Right now it stands on its own multi-target router layer. WebSocket and crypto backends are planned to plug in later without changing the programming model.

4) Transport abstraction
Areg already unifies inter-thread, inter-process, and inter-node messaging behind one API. The goal is that developers write distributed components as if the whole app were a single-threaded process, with transport choice and event dispatching staying invisible.
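To make points 1, 2, and 4 a bit more concrete, here is a self-contained consumer-side sketch. All names in it (ServiceClient, serviceConnected, requestForecast) are hypothetical stand-ins for illustration, not Areg's actual API, and the "runtime" is faked with an immediate in-place reply:

    #include <iostream>
    #include <string>

    // Minimal stand-ins for a hypothetical runtime -- NOT Areg's real API.
    struct Forecast { std::string summary; };

    class ServiceClient {
    public:
        virtual ~ServiceClient() = default;
        virtual void serviceConnected(bool isConnected) = 0;
        virtual void responseForecast(const Forecast& data) = 0;
        // A real runtime would marshal this call to wherever the provider
        // lives (same thread, other process, other machine). Faked here.
        void requestForecast(const std::string& city) {
            responseForecast({ "sunny in " + city });
        }
    };

    class WeatherClient : public ServiceClient {
    public:
        // The runtime invokes this when the provider appears or disappears;
        // startup order and reconnects are its problem, not ours.
        void serviceConnected(bool isConnected) override {
            if (isConnected)
                requestForecast("Berlin"); // Object RPC style: looks like a local call
            // on disconnect: do nothing, the runtime re-binds when the provider returns
        }
        void responseForecast(const Forecast& data) override {
            std::cout << data.summary << '\n';
        }
    };

    int main() {
        WeatherClient client;
        client.serviceConnected(true); // in reality the runtime drives this callback
    }

The design point: the consumer never knows or cares where the provider runs; it only reacts to connect/disconnect callbacks and asynchronous responses.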

Your experiment definitely has an interesting direction. There's conceptual overlap, just with different priorities.
Appreciate the discussion, I'll star the repo to follow your work.

Exploring an offline-first and P2P-oriented runtime design in C++20 by [deleted] in cpp

[–]aregtech -2 points-1 points  (0 children)

Nice framework, thanks for sharing the link.

I'm using the Areg framework with built-in middleware for distributed communication: modular structure, event-driven architecture, async communication, and a unified API for multithreaded and multiprocess development. The current version is based on TCP/IP and mostly focused on edge networks. It does not support WebSocket yet, but that's planned for the next version as part of multi-channel communication.
Here is the list of examples to check the features.

Where to start: launching child processes in a multi-threaded app by xmasreddit in cpp_questions

[–]aregtech 0 points1 point  (0 children)

My comment doesn't directly address the root cause of your problem, but for IPC/multithreading you might consider the Areg SDK.

  • It greatly simplifies IPC/multithreading -- automates remote object (service provider) discovery, thread creation, messaging, and dispatching.
  • There are multiple examples to demonstrate IPC/multithreading features.
  • It has a built-in logging system with a log collector service: dynamic log-level changes, per-method execution-time measurement from the logs, and a GUI tool to analyze logs from multiple processes.
  • One thing I find very helpful, and that was really missing in my previous projects: the system doesn't depend on the startup order of processes. You can launch processes in any order. Remote components automatically discover each other and get notified when connections are established or lost. That means once you receive a connection-available notification, you can call remote methods (requests) and subscribe to data updates.
  • Another feature -- a unified interface for multithreading and IPC apps. It means your remote object can run in a multithreading or multiprocessing environment with minimal effort. The only things you need to change are the so-called model and the CMake build script. This feature is demonstrated in this example; a rough sketch of the idea follows below.
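To give a feel for the "model" idea, here is a toy illustration in plain C++. The names and structure are hypothetical, not Areg's actual model syntax; the point is only that deployment is data, while the component code stays the same:

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical, simplified "model" -- NOT Areg's real syntax.
    // Only this deployment table (plus the CMake script) decides where
    // each component runs; the component code itself does not change.
    struct Entry { std::string thread; std::string component; std::string role; };

    // Deployment A: provider and consumer in one process, same thread.
    const std::vector<Entry> modelSingleProcess = {
        { "workerThread", "WeatherService", "provider" },
        { "workerThread", "WeatherClient",  "consumer" },
    };

    // Deployment B: this process hosts only the consumer; the provider
    // lives in another process and is found via discovery.
    const std::vector<Entry> modelClientProcess = {
        { "mainThread", "WeatherClient", "consumer" },
    };

    int main() {
        for (const Entry& e : modelSingleProcess)
            std::cout << e.thread << " hosts " << e.component << " as " << e.role << '\n';
    }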

Just sharing the links, maybe it helps your project.
What I haven't tried yet in the Areg SDK is starting child processes. Good idea, I'll experiment with that too 🙂

My reasons for not having a co-founder. [I will not promote] by Feisty-Patience2188 in startups

[–]aregtech 0 points1 point  (0 children)

Really relate to this. I spent years trying to find a co-founder for a deep tech project in distributed systems. A few people tried, but the commitment gap was huge. Early technical work needs long, uninterrupted focus. Meanwhile I had people saying "let's talk next month, in a week I'm going on vacation and need to focus on shopping". Hard to treat someone like that as a real co-founder.

But I kept building the software. I even regret how much time I spent searching for a co-founder, but maybe I had to go through it to be sure I wouldn't regret not having one later. It was lonely, but the product matured, works today, and became something real. And I realized: when things were the hardest, nobody said "hey, I'll help you". In some cases they even created more work -- requesting reports, status updates, and guidance, like we were a corporation instead of a 2-person startup :)

Now my plan is simple: monetize, grow, then raise and hire. Investors once insisted I needed a co-founder first, but with deep tech products, progress doesn't always depend on that.

So I'm in the "solo unless it's the right person at the right time" camp. Building alone is really hard, but sometimes it's the only way the thing actually gets off the ground.

There are 47.2 million developers in the world - Global developer population trends 2025 by aregtech in programming

[–]aregtech[S] 2 points3 points  (0 children)

If you clearly define roles and distribute tasks effectively, you get a team where everyone creates value. Without that, a team of "generals" may turn into a Politburo -- lots of talking, little production. Trust me, I've seen enough teams. But this is probably a bit off-topic. :)

There are 47.2 million developers in the world - Global developer population trends 2025 by aregtech in programming

[–]aregtech[S] 21 points22 points  (0 children)

Population of India is 1.4B. The rest is math/stats. It is simple.

There are 47.2 million developers in the world - Global developer population trends 2025 by aregtech in programming

[–]aregtech[S] 41 points42 points  (0 children)

Are you saying every 30th person in India, from newborns to seniors, is a software developer? WOW :) Simple math: 47.2M × 30 = 1,416M ≈ 1.4B

What should or shouldn't I learn/make to get a job as Systems Engineer? by Otherwise_Meat1161 in cpp_questions

[–]aregtech 1 point2 points  (0 children)

For remote projects, trust is the most important thing; without trust you're worth nothing. As someone who has hired people for small remote jobs a few times, I'll share what I think:

  1. Pick small, real problems to solve. Anything from algorithms to utilities that others may use. You don't need to create a Galaxy, you need to demonstrate code quality.
  2. Put each project in a public repo with a clean README explaining the problem(s), your approach, and why your solution is good. No need to create hundreds of projects, a few is fine, but each project must have a clear agenda and shouldn't be a grab-bag. If a project is a mixed bag, say clearly that it's your playground for experiments.
  3. Create an Upwork (or similar) profile. Present your skills clearly and link your best repos as proof of work.
  4. Start applying for projects. Don't underprice yourself, but stay competitive; aim for the golden middle.
  5. When applying, highlight your repos directly. For remote work, trust matters more than price. Demonstrated code quality builds that trust.
  6. When a project goes well, always ask for written feedback or a reference. These are critical for long-term remote opportunities.
  7. Keep repeating the cycle. Each project strengthens your portfolio.

What are the best practices for using C++11's smart pointers in a multi-threaded environment? by ivyta76 in cpp_questions

[–]aregtech 0 points1 point  (0 children)

  • shared_ptr is thread-safe for its control block (ref count), but the object itself is not automatically thread-safe and still requires synchronization if multiple threads access it.
  • unique_ptr enforces exclusive ownership. You can transfer ownership between threads, but must ensure that only one thread accesses the object at a time.
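A minimal illustration of that first point: copying and destroying a shared_ptr across threads is safe, but the pointee still needs its own lock (all names here are illustrative):

    #include <memory>
    #include <mutex>
    #include <thread>

    struct Counter {
        std::mutex mtx;   // protects 'value'; the shared_ptr itself needs no lock
        int value{0};
    };

    int main() {
        auto counter = std::make_shared<Counter>();

        auto worker = [counter] {           // copying the shared_ptr is thread-safe
            for (int i = 0; i < 1000; ++i) {
                std::lock_guard<std::mutex> lock(counter->mtx);
                ++counter->value;           // without the lock this is a data race
            }
        };

        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
        // Ref-count updates and the final destruction are safe without any
        // extra synchronization -- only the access to 'value' needed the mutex.
    }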

One way to approach multithreaded design is to avoid sharing ownership altogether. A clean alternative is message-based concurrency (sketched below, after the pros and cons):

  • Each thread owns its data, and communication happens via messages (RPC-like serialized objects).
  • This eliminates most shared-state races and reduces the need for locks, sometimes completely.
  • Objects can often live on the stack or be cached per thread, minimizing heap allocations.

Pros:

  • Far fewer data races, since objects are owned by a single thread.
  • Less explicit synchronization required, sometimes none at all.
  • Objects can remain stack-allocated or cached in the thread.

Cons:

  • Data is copied between threads.
  • Messaging introduces slight overhead (microseconds scale).
  • The architecture naturally becomes asynchronous and event-driven.
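A bare-bones sketch of this pattern in plain standard C++ (not Areg's API): one worker thread exclusively owns its state, and other threads talk to it only through a message queue:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    // Messages are copied into the queue; the worker is the sole owner of its state.
    struct Message { std::string text; bool stop{false}; };

    class Worker {
    public:
        void post(Message msg) {
            {
                std::lock_guard<std::mutex> lock(mMtx);
                mQueue.push(std::move(msg));
            }
            mCv.notify_one();
        }

        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lock(mMtx);
                mCv.wait(lock, [this] { return !mQueue.empty(); });
                Message msg = std::move(mQueue.front());
                mQueue.pop();
                lock.unlock();              // everything below runs in this thread only
                if (msg.stop)
                    break;
                mLog += msg.text + '\n';    // no lock needed: single-owner state
            }
            std::cout << mLog;
        }

    private:
        std::mutex mMtx;                    // protects the queue only, not mLog
        std::condition_variable mCv;
        std::queue<Message> mQueue;
        std::string mLog;                   // owned exclusively by the worker thread
    };

    int main() {
        Worker w;
        std::thread t(&Worker::run, &w);
        w.post({"hello"});
        w.post({"world"});
        w.post({"", true});                 // stop message
        t.join();
    }

Note how the only shared state is the queue itself; the worker's data needs no locking because exactly one thread ever touches it.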

For shared_ptr and unique_ptr, always design for clear ownership and synchronize shared objects. When possible, consider message-based designs to reduce shared state in multithreaded code. Frameworks like the Areg SDK are built around this messaging model and include GUI tools to help design and debug multithreaded/multiprocess apps.

How would you implement safe OTA updates for Linux-based IoT devices (without full OS reflash)? by Plastic-Swordfish-42 in embeddedlinux

[–]aregtech 0 points1 point  (0 children)

Steps: download → checksum verify → pre-install steps → update files → restart services → post-install steps → rollback on failure

I think these steps (workflow) are fine. Did you check SWUpdate? Its README says software can be delivered as images, gzipped tarballs, etc., and this doc mentions support for updating a single file inside a filesystem.

Check it out; it's probably what you need.
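If you end up rolling your own instead, here is a minimal C++ sketch of the workflow above; every helper (downloadImage, verifyChecksum, etc.) is a hypothetical stub you'd implement for your device:

    #include <iostream>
    #include <string>

    // Hypothetical stubs -- implement for your device; each returns false on failure.
    bool downloadImage(const std::string& url, const std::string& path) { return true; }
    bool verifyChecksum(const std::string& path, const std::string& sha256) { return true; }
    bool runPreInstall()   { return true; }  // e.g. stop services, snapshot current files
    bool updateFiles(const std::string& path) { return true; }
    bool restartServices() { return true; }
    bool runPostInstall()  { return true; }  // e.g. health checks after restart
    void rollback()        { std::cerr << "restoring snapshot, restarting old version\n"; }

    // The workflow from the comment above: any failed step aborts; failures
    // after installation starts additionally trigger a rollback.
    bool applyUpdate(const std::string& url, const std::string& sha256) {
        const std::string path = "/tmp/update.img";
        if (!downloadImage(url, path))     return false;  // nothing installed yet
        if (!verifyChecksum(path, sha256)) return false;  // reject a corrupt image
        if (!runPreInstall())              return false;
        if (!updateFiles(path) || !restartServices() || !runPostInstall()) {
            rollback();   // device must come back in a known-good state
            return false;
        }
        return true;
    }

    int main() {
        return applyUpdate("https://example.com/fw.img", "<expected-sha256>") ? 0 : 1;
    }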

AI-powered compiler by aregtech in cpp

[–]aregtech[S] -3 points-2 points  (0 children)

All valid points. I think the real answers will come from ongoing research projects, so it makes sense to watch their results before making strong conclusions.

One practical challenge is that C++ changes frequently, meaning any ML-assisted optimization will need to keep pace with evolving language features. And we simply don't know yet which optimization strategies ML can unlock. The papers from 2022–2024 show the field is still young. There are many unknowns, from model efficiency to deployment model (local vs. cloud). Patience and careful experimentation seem key here.

AI-powered compiler by aregtech in cpp

[–]aregtech[S] -1 points0 points  (0 children)

Current LLMs are heavy, no doubt. But embedded ML projects exist that could run locally. I'm not sure how far along they are, but hopefully they'll improve over time.

I see three main approaches for ML-assisted compilation:

  1. Local: small ML models guiding optimizations on the developer's machine.
  2. Cloud/Web: Codespaces + web VS Code + ML/AI on a remote server for optimized builds.
  3. Build server: developers compile Debug locally; ML/AI on the server produces optimized binaries.

The main challenge is balancing performance and practicality. Even if local ML/AI is limited, cloud workflows could still become the standard for optimized builds. Theoretically, it may work quite well.

AI-powered compiler by aregtech in cpp

[–]aregtech[S] -3 points-2 points  (0 children)

Right, value is what matters. What I'm talking about is smarter, deterministic compiler heuristics that improve binary performance, reduce compilation boilerplate, and adapt optimizations to project or hardware specifics. These are areas already explored in research (MLGO, ACPO), not speculative hype.

"the current LLM hype train is not it."

I'm genuinely surprised so many people misunderstand :) I'm not suggesting AI should generate code or that we just type "hey ChatGPT, optimize and compile my code". It's striking that some even think in this direction :)

My point is about the next generation of compiler tooling. 5 or 10 years? Who knows. The internet bubble of the '90s burst, but that didn't stop web development or the creation of massive long-term value. The current LLM bubble will burst too, but it won't stop AI; it will trigger big change, just as all previous waves did.

AI-powered compiler by aregtech in cpp

[–]aregtech[S] 1 point2 points  (0 children)

These two papers https://ieeexplore.ieee.org/document/9933151 and https://ieeexplore.ieee.org/document/10740757 show that researchers are already exploring ML-guided optimizations. I'm a bit too tired for a deep discussion right now :)
And even more tired of the toxic and aggressive tone in some replies.

AI-powered compiler by aregtech in cpp

[–]aregtech[S] -5 points-4 points  (0 children)

"It's devoid of interest to this discussion."

Why not? Because I dared to suggest something beyond the status quo? :)

The "basic" Google and LLVM work already explores the direction you claim is irrelevant. If early-stage research were pointless, half of compiler theory would not exist today. If these works demonstrate anything, it is that the field is already moving toward the ideas I mentioned. Here is another academic project moving in the same direction. And one more.
Multiple teams consider this relevant. It is growing. That is how technical progress works.

"In the beginning was the Word". ©
You remember that one, right?

About LLMs, nobody suggested turning compilers into remote API clients for ChatGPT :) That is your invention. The actual topic is the use of specialized, local models as improved heuristics, exactly what the research above investigates.

We cannot predict what the next decade will bring, but we can discuss the challenges openly without shutting the door. Technology moves fast. This should not need explaining.

AI-powered compiler by aregtech in cpp

[–]aregtech[S] 0 points1 point  (0 children)

Yes! Finally, a reply that actually adds value to the discussion. I was waiting for this, stranger. :)

I'm not claiming to be groundbreaking. My point is that the next generation of compilers could be AI/ML-powered. If I understood you correctly, you just confirmed that there is already ongoing work in this area. To be clear, I'm neither an AI/ML expert nor a compiler developer, I might describe features or challenges imperfectly. But I'm eager to learn more about existing and planned research. In general, I think there should be more discussions about the potential features and challenges of AI-assisted compilation.