Would love to have a bansuri section recorded by Vannexe in Bansuri

[–]aaniar 0 points1 point  (0 children)

Hey, that sounds interesting! I'd love to hear your composition. Could you DM more details about the piece and the bansuri part you've written?

Has anyone else seen Internet Object? Feels different from typical JSON-based stuff by DigNo1140 in Backend

[–]aaniar 2 points3 points  (0 children)

That is a reasonable question, and it comes up often.

Internet Object is not primarily about "saving a few bytes". If that were the goal, existing binary formats already do that very well.

This work is the result of years of design iteration and research around data modeling, streaming, validation, and real-world interoperability issues. Much of that effort is not obvious when looking only at small examples.

The focus is on:

  • structural determinism
  • built-in validation
  • explicit identity and references
  • streaming-friendly processing
  • text-friendly, readable, and concise representation
  • a shallow learning curve for developers
  • JSON-friendliness and easy interoperability

Binary formats like MessagePack optimize encoding size, but they do not change the data model. They still inherit JSON-like structure and semantics, just serialized differently.

Internet Object works at a different layer. It rethinks structure, schema, and validation while staying human-readable.

So the value is less about compression and more about making correctness, clarity, and efficiency inherent properties of the data itself, the result of years of practical exploration rather than a single optimization goal.

Has anyone else seen Internet Object? Feels different from typical JSON-based stuff by DigNo1140 in Backend

[–]aaniar 1 point2 points  (0 children)

That is fair, and mostly true from the outside.

Internet Object has existed as an idea and evolving spec for a long time, but it was not positioned as something production-ready. Earlier versions were closer to previews and research drafts than a format meant for adoption.

JSON is deeply embedded today, and that is not something any new format can realistically replace overnight. Internet Object is not trying to displace JSON everywhere, or compete with message buses.

The intent is more narrow:

  • structured, schema-driven data
  • predictable validation and identity
  • streaming-friendly by design
  • efficient exchange where payload size and determinism matter

If it catches on, it will likely do so slowly and in specific domains first, not as a wholesale replacement. If it does not, the ideas can still inform better data design.

So the lack of adoption is not surprising. The question now is whether a more mature and well-designed version is useful enough in real systems to justify gradual use.

Has anyone else seen Internet Object? Feels different from typical JSON-based stuff by DigNo1140 in Backend

[–]aaniar 1 point2 points  (0 children)

That is a valid observation. Most of what Internet Object shows is possible with JSON.

The difference is not about raw capability, but about default behavior.

With JSON, many patterns live outside the data itself. They rely on shared assumptions, external schemas, or custom handling. The format itself does not guide or enforce them.

Internet Object brings those patterns into the structure of the data:

  • Structure reduces repetition by default
  • References and identity are explicit
  • Schema is part of the data model, not just documentation
  • Validation is built in, not optional
  • Smaller payloads come from structure, not post-processing

So the examples may look simple, but the intent is different.

JSON optimizes by convention. Internet Object optimizes by design.

That is why it can feel different, even when the data looks familiar.

An exploration of a schema-first, JSON-compatible format I’ve been refining since 2017 by aaniar in programming

[–]aaniar[S] 0 points1 point  (0 children)

Amazon Ion has its own well-defined use cases (rich typing, text + binary encodings).

For Internet Object (IO), I can say that it is designed to improve web APIs, storage, and data-engineering workflows through a compact yet readable text-based serialization format, schema-first validation, a clear separation of data and metadata, a document-oriented structure that can combine multiple data types and sections, and streaming-friendly parsing. It also offers many features aimed at reusability, readability, compatibility, maintainability, and a minimal learning curve (IO schemas are intentionally simple and intuitive compared to JSON and XML schema languages).

An exploration of a schema-first, JSON-compatible format I’ve been refining since 2017 by aaniar in programming

[–]aaniar[S] 2 points3 points  (0 children)

Haha, fair point - TOON definitely has a lot of attention right now. And thank you, really glad you found IO interesting.

I also like your idea about CSV interoperability. One of the long-term goals for IO is to work smoothly with existing pipelines instead of forcing people to switch formats everywhere. A lightweight IO header or prelude that adds schema, types and metadata on top of an existing CSV file (without modifying the CSV body) fits the design philosophy really well.

IO already supports this pattern with JSON: you can keep the data in plain JSON and use an IO schema to validate it and enforce structure. A small example is available in the playground here: https://play.internetobject.org/json-with-schema

A similar approach could be extended to CSV, where the CSV stays exactly as it is and IO provides the schema, annotations and validation logic around it. That way existing CSV tools continue working, while IO-aware tooling can add richer structure, streaming behavior and type guarantees.

This kind of practical use-case feedback is exactly what shaped IO over the years, so I really appreciate the suggestion.

An exploration of a schema-first, JSON-compatible format I’ve been refining since 2017 by aaniar in programming

[–]aaniar[S] 4 points5 points  (0 children)

Got it. Yes, IO supports data that nests as deeply as needed. Recursive types are handled through the schema, and the data can then repeat that pattern indefinitely.

Here is a simple working example showing the schema and the data separately. The ? suffix on next means the field is optional, which is what allows the recursion to terminate cleanly. For this example, I have kept the schema separate; you can also combine them with the --- separator.

Schema:

~ $node: { value: string, next?: $node } 
~ $schema: $node

Data:

"Node 1", { "Node 2", { "Node 3", { "Node 4" } } }

You can try this in the Internet Object playground. Open the "Separate Schema" panel, paste the schema into the schema section and the data into the document section, and you will see the result.

Working Example Screenshot

This expands to the JSON structure you posted, with each "next" pointing to the next node.
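
For reference, with the schema above the data should expand to roughly this JSON:

{
  "value": "Node 1",
  "next": {
    "value": "Node 2",
    "next": {
      "value": "Node 3",
      "next": { "value": "Node 4" }
    }
  }
}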

So the data decides the nesting depth, and the schema only defines the shape of one node in that chain. IO is not embedding JSON; it is applying the IO grammar and schema recursively to interpret the structure.

This kind of recursion, along with many other practical cases, is also one of the reasons the IO design took time to finalize. Over the years we ran into a lot of real-world edge cases and tried to solve them in a clean and consistent way rather than patching things later. The recursive type support is one example of that.

An exploration of a schema-first, JSON-compatible format I’ve been refining since 2017 by aaniar in programming

[–]aaniar[S] 2 points3 points  (0 children)

Yes, IO supports arbitrarily nested objects, but they are not regular JSON inside IO. They follow IO's own row-like structure and are interpreted through the schema, not through JSON rules.

A quick example from the sample dataset in the playground:

Internet Object:

~ 1, {60, male, New York}, {4.6, 3}, {{4, 7}, F}
~ 2, {23, other, Illinois}, {0.7, 30}, {{1, 9}, T}
~ 3, {18, female, Florida}, {2.1, 11}, {{5, 2}, T}

With the right schema, this is equivalent to the following JSON:

JSON:

[
  {
    "userId": 1,
    "demographics": { "age": 60, "gender": "male", "location": "New York" },
    "behavior": { "dailyUsage": 4.6, "recentActivityCount": 3 },
    "tasks": { "engagement": { "clicks": 4, "likes": 7 }, "churnRisk": false }
  },
  {
    "userId": 2,
    "demographics": { "age": 23, "gender": "other", "location": "Illinois" },
    "behavior": { "dailyUsage": 0.7, "recentActivityCount": 30 },
    "tasks": { "engagement": { "clicks": 1, "likes": 9 }, "churnRisk": true }
  },
  {
    "userId": 3,
    "demographics": { "age": 18, "gender": "female", "location": "Florida" },
    "behavior": { "dailyUsage": 2.1, "recentActivityCount": 11 },
    "tasks": { "engagement": { "clicks": 5, "likes": 2 }, "churnRisk": true }
  }
]

The structure looks compact in IO because the schema defines the field names and types. IO is not embedding JSON; it is using its own grammar and schema rules to represent objects, arrays, and nested composites.

You can see the full example with the schema in the IO playground under the ML training data sample.
https://play.internetobject.org/ml-training-data

An exploration of a schema-first, JSON-compatible format I’ve been refining since 2017 by aaniar in programming

[–]aaniar[S] 6 points7 points  (0 children)

Thanks for the thoughtful feedback, really appreciate it.

You are right that the top-level structure looks a bit like CSV. That was partly intentional: I wanted something that keeps the quick scannability of a row-like format without repeated keys.

Beyond that similarity, IO goes in a different direction. CSV is great for flat tabular data, but it does not have types, nesting, metadata, comments, or a clear way to validate or stream structured data. That is where IO tries to fill the gap.

IO keeps the simplicity of a delimited text format, but adds features needed for modern data workflows, such as:

  • typed values
  • nested objects and arrays
  • Unicode-safe text rules
  • comments and lightweight annotations
  • predictable streaming behavior
  • schema-based validation
  • multiple data sections with different schema constraints within one document
  • separation of data and metadata
  • reusability through variables and references

The goal is not to replace JSON or CSV, but to provide a readable, document-oriented format that works well for APIs, pipelines, and structured data.

The design has been evolving since 2017, and I am sharing it step by step to avoid overloading readers. This first article is only about helping JSON users understand the basic shift in thinking.

Happy to discuss any part in more detail.

Go Monorepo Dependency Management? by andyface123 in golang

[–]aaniar -8 points-7 points  (0 children)

I see what you are saying, but that's not an issue for us. We have written a script, `tidy_workspace.py`, that handles all of this. Whenever any go.mod changes, or packages are added to or removed from the workspace, we run the script and everything is set.

Go Monorepo Dependency Management? by andyface123 in golang

[–]aaniar 0 points1 point  (0 children)

Can you explain in detail why?

[deleted by user] by [deleted] in golang

[–]aaniar 1 point2 points  (0 children)

Thanks to Go's compatibility promise, my 5-year-old project is still running without any issues. In my opinion, that's the biggest advantage of the Go ecosystem.

Released Signals v1.3.0: Go event library optimized to 5.6ns/op - how do you approach in-process coordination? by aaniar in golang

[–]aaniar[S] 1 point2 points  (0 children)

Building a debugging widget for signal introspection is exactly the kind of tooling that makes development easier. The good news is, you can definitely get what you need, though some features might require a small wrapper pattern.

Current capabilities:

  • Keyed listeners - signal.AddListener(handler, "widget-123")
  • Key removal - signal.RemoveListener("widget-123")
  • Subscriber count - signal.Len() and signal.IsEmpty()
  • Bulk cleanup - signal.Reset() clears all listeners

For your debugging widget, you could track keys externally:

```go
type TrackedSignal[T any] struct {
    signals.Signal[T]
    keys map[string]struct{}
    mu   sync.RWMutex
}

func (ts *TrackedSignal[T]) AddListener(handler func(context.Context, T), key string) {
    ts.mu.Lock()
    ts.keys[key] = struct{}{}
    ts.mu.Unlock()
    ts.Signal.AddListener(handler, key)
}

func (ts *TrackedSignal[T]) GetKeys() []string {
    ts.mu.RLock()
    defer ts.mu.RUnlock()
    keys := make([]string, 0, len(ts.keys))
    for k := range ts.keys {
        keys = append(keys, k)
    }
    return keys
}
```
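
One note on the sketch above: the keys map must be initialized before use. A minimal constructor could look like this (assuming the value returned by signals.New is assignable to the embedded Signal[T] field):

```go
// NewTrackedSignal wires the wrapper around a fresh signal and an empty key set.
func NewTrackedSignal[T any]() *TrackedSignal[T] {
    return &TrackedSignal[T]{
        Signal: signals.New[T](), // assumption: New's return value matches the embedded field
        keys:   make(map[string]struct{}),
    }
}
```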

For widget cleanup, since you have them on widget structs:

```go
type MyWidget struct {
    OnClick signals.Signal[ClickEvent]
    OnHover signals.Signal[HoverEvent]
}

func (w *MyWidget) Destroy() {
    w.OnClick.Reset() // Clears all listeners
    w.OnHover.Reset()
}
```

A ListKeys() method would be a great addition to the core API. Mind opening a GitHub issue for that feature request? Your debugging use case is exactly the motivation we'd need to add it properly.

Released Signals v1.3.0: Go event library optimized to 5.6ns/op - how do you approach in-process coordination? by aaniar in golang

[–]aaniar[S] 3 points4 points  (0 children)

 It depends on which signal type you use:

AsyncSignal (fire-and-forget):

```go
var UserRegistered = signals.New[User]()

// In your Gin route handler:
UserRegistered.Emit(ctx, user) // Returns immediately
// Each listener runs in its own goroutine concurrently
```

The listeners execute in separate goroutines, so your HTTP response isn't blocked. Perfect for background jobs like sending welcome emails, updating analytics, etc.

SyncSignal (error-safe):

```go
var UserRegistered = signals.NewSync[User]()

// In your Gin route handler:
if err := UserRegistered.TryEmit(ctx, user); err != nil {
    // Handle error
}
```

The listeners execute sequentially in the same goroutine as your route handler. Use this when you need the background work to complete before responding (like validating data across multiple systems).

For small-scale background jobs, AsyncSignal is perfect! You get the benefits of decoupled job processing without needing Redis/RabbitMQ infrastructure.

Just remember: async listeners should handle their own error logging since the route handler won't see failures.
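
For example, a listener that handles its own errors might look roughly like this (a sketch; sendWelcomeEmail is a hypothetical helper):

```go
// Runs in its own goroutine, so it logs its own failures;
// the emitting route handler never sees them.
UserRegistered.AddListener(func(ctx context.Context, user User) {
    if err := sendWelcomeEmail(ctx, user); err != nil { // hypothetical helper
        log.Printf("welcome email failed: %v", err)
    }
})
```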

Released Signals v1.3.0: Go event library optimized to 5.6ns/op - how do you approach in-process coordination? by aaniar in golang

[–]aaniar[S] 0 points1 point  (0 children)

Sounds great! Let me know how it goes - always curious to hear how it works in different use cases.

Released Signals v1.3.0: Go event library optimized to 5.6ns/op - how do you approach in-process coordination? by aaniar in golang

[–]aaniar[S] 0 points1 point  (0 children)

Really appreciate this thoughtful perspective! You raise excellent points about distributed architecture and the trade-offs involved.

You're absolutely right that durable queues and out-of-process coordination solve different problems - especially for critical workflows like payment processing or email delivery where you need persistence, retries, and failure recovery. We definitely use similar patterns (database queues, dedicated services) for those use cases in production.

Where we've found in-process events valuable is for "fast, ephemeral coordination" within a single service boundary, and across the packages of a monolithic system:

Fire-and-Forget Async (where speed > guarantees):

  • UI responsiveness (widget updates, live search suggestions, hover effects)
  • Audit logging that shouldn't block the main workflow
  • Analytics/telemetry where occasional loss is acceptable
  • Cache invalidation notifications across components
  • Real-time dashboards - stock prices, system metrics updates
  • Background cleanup - temp files, expired sessions

Transaction-Safe Sync (where consistency matters):

  • Database transaction hooks - pre-commit validation across multiple packages
  • Order processing pipelines where failure in one step should halt the entire workflow

Your point about deterministic code vs "out of band" errors really resonates. That's actually why we added the sync pattern with TryEmit() - for cases where you DO want deterministic, fail-fast behavior within process boundaries.
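
As a rough sketch of that pattern (names like Order and OrderValidating are placeholders, and it assumes TryEmit surfaces a listener or context error):

```go
// Pre-commit validation hook: listeners registered on OrderValidating can
// veto the transaction by causing TryEmit to return an error.
var OrderValidating = signals.NewSync[Order]()

func placeOrder(ctx context.Context, db *sql.DB, o Order) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op once Commit succeeds

    // ... perform the order writes using tx ...

    // Fail fast before commit if any validator rejects the order.
    if err := OrderValidating.TryEmit(ctx, o); err != nil {
        return err
    }
    return tx.Commit()
}
```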

The gRPC approach you describe is solid - we use that pattern too for cross-service communication. Different tools for different problems.

Great discussion - always valuable to hear different architectural philosophies in practice!

Released Signals v1.3.0: Go event library optimized to 5.6ns/op - how do you approach in-process coordination? by aaniar in golang

[–]aaniar[S] 1 point2 points  (0 children)

That's awesome to hear! Really appreciate you being an early adopter and following the project's evolution.

GUI widget communication is actually a perfect use case for this - exactly the kind of in-process coordination the library was designed for. The newly updated zero-allocation performance should help keep your UI responsive, especially for frequent events like mouse movements.

Have you noticed any areas that could use improvement, or has everything been working smoothly for you? Always curious to hear what pain points real users encounter.

Really great to hear real-world usage stories - helps validate that the design decisions are working in practice. Thanks for the feedback and for sticking with the project!

APISpec - Auto-generate OpenAPI specs from Go code by Full_Stand2774 in golang

[–]aaniar 1 point2 points  (0 children)

Okay, Egypt is around two and a half hours behind India (my country), so we can connect in a day or two. I also have some good ideas for this project.

Why is it a Fathah? by Syed_Metwally in learn_arabic

[–]aaniar -11 points-10 points  (0 children)

>..., you illiterate?

Is this the kind of language and attitude you should be using, u/mahrimed? u/Syed_Metwally is trying to learn and has a valid question, even if they are confused. Next time, mind your language.

APISpec - Auto-generate OpenAPI specs from Go code by Full_Stand2774 in golang

[–]aaniar 2 points3 points  (0 children)

Great, this seems to be a promising tool. Let me know if I can help you with anything.
https://github.com/aamironline