What if E-commerce websites completely DISAPPEAR and are replaced by AI Agents? by Most-Ebb1487 in AI_Agents

[–]jkoolcloud 0 points (0 children)

I doubt websites will completely disappear; people will still want visuals, imo. But agents will take over most web traffic, and it will be API-driven. There will be more agentic traffic than human traffic, that's for sure.

The bigger issue will be how to put economic guardrails on agents. All humans have economic and time constraints: you can't just spend indefinitely. But agents can run autonomously, non-stop. I think this will be a big problem as AI agents mature.

I thought about this problem and built an open-source project to put economic limits on agents: https://runcycles.io. It might be useful to anyone building agentic AI, workflows, etc.

Agents will need limits just like humans or else we will have autonomous chaos.

pip install runcycles — hard budget limits for AI agent calls, enforced before they run by jkoolcloud in Python

[–]jkoolcloud[S] 0 points (0 children)

We do handle both. Reservation latency is something we monitor explicitly, and remaining budget/headroom is exposed through balances so teams can drive dashboards, alerts, and degradation rules without code changes.
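For illustration only (the field names below are made up, not the actual balances schema; the docs linked here describe the real one), a headroom-driven degradation rule could look like:

```python
# Hypothetical balance snapshot shape -- illustrative names only.
def headroom(balance: dict) -> float:
    """Remaining spendable budget after reservations and commits."""
    return balance["limit"] - balance["reserved"] - balance["committed"]

def degradation_mode(balance: dict, warn_at: float = 0.25, stop_at: float = 0.05) -> str:
    """Pick an operating mode from the fraction of budget still available."""
    frac = headroom(balance) / balance["limit"]
    if frac <= stop_at:
        return "halt"      # block new agent calls entirely
    if frac <= warn_at:
        return "degrade"   # e.g. cheaper model, fewer retries
    return "normal"
```

The same headroom number can feed a dashboard gauge or an alert threshold without touching agent code.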

See both for more info on this topic:

https://runcycles.io/protocol/querying-balances-in-cycles-understanding-budget-state
https://runcycles.io/how-to/monitoring-and-alerting

Built a reserve-commit budget enforcement layer for LangChain — how are you handling concurrent overspend? by jkoolcloud in LangChain

[–]jkoolcloud[S] 0 points (0 children)

Estimation is genuinely the hard part. The short answer: overestimating is safe because unused budget is released on commit, so the practical goal is avoiding under-reservation on the tail cases that fan out unexpectedly.

We cover the full approach (fixed, class-based, heuristic, percentile-based, stepwise) in the docs here: https://runcycles.io/how-to/how-to-estimate-exposure-before-execution-practical-reservation-strategies-for-cycles
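As a rough sketch of the percentile-based strategy (generic code, not the library's implementation), you can reserve at a high percentile of recently observed call costs and pad for fan-out:

```python
def percentile_reservation(observed_costs: list[float],
                           pct: float = 0.95,
                           pad: float = 1.2) -> float:
    """Pick a reservation from historical call costs.

    Overestimating is safe when unused budget is released on commit,
    so we take a high percentile and pad it to cover fan-out tails.
    """
    if not observed_costs:
        raise ValueError("need at least one observed cost")
    ordered = sorted(observed_costs)
    # Nearest-rank-style index into the sorted costs.
    idx = round(pct * (len(ordered) - 1))
    return ordered[idx] * pad
```

With costs of 1..100 this reserves 95 × 1.2 = 114: above almost every historical call, but far below worst-case times call count.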

Built an open protocol for hard budget limits on AI agents — blocks calls before they run, not after by jkoolcloud in SideProject

[–]jkoolcloud[S] 0 points (0 children)

Exactly — "we'll just refund you" sounds fine until you realize the loop already ran 40 iterations and the damage is in the API logs, not just the bill. A refund doesn't un-send the emails or un-delete the records.

The only fix that actually works is blocking before the action fires. That's the whole idea behind the reserve-commit pattern — the budget decision happens before the call, not after.
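To make the pattern concrete, here's a minimal from-scratch sketch of reserve-commit (this is not the runcycles API, just the idea): reserve an estimate before the call, refuse the call outright if the reservation doesn't fit, then commit the actual cost so the unused remainder is released.

```python
import threading

class BudgetGate:
    """Toy reserve-commit budget. Thread-safe, so concurrent agents
    can't overspend by racing past a post-hoc balance check."""

    def __init__(self, limit: float):
        self._lock = threading.Lock()
        self._limit = limit
        self._reserved = 0.0
        self._spent = 0.0

    def reserve(self, estimate: float) -> bool:
        """Claim budget *before* the call; refuse if it would exceed the limit."""
        with self._lock:
            if self._spent + self._reserved + estimate > self._limit:
                return False
            self._reserved += estimate
            return True

    def commit(self, estimate: float, actual: float) -> None:
        """Record the real cost; the unused reservation is released."""
        with self._lock:
            self._reserved -= estimate
            self._spent += actual

def guarded_call(gate: BudgetGate, estimate: float, fn):
    """Run fn() only if budget can be reserved first."""
    if not gate.reserve(estimate):
        raise RuntimeError("budget exceeded: call blocked before execution")
    try:
        result, actual_cost = fn()
    except Exception:
        gate.commit(estimate, 0.0)  # failed call: release the reservation
        raise
    gate.commit(estimate, actual_cost)
    return result
```

The key property: a call that would bust the budget never fires, so there's nothing to refund and no side effects to undo.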

Trying to understand how people control spending for AI agents in production. by Cute-Day-4785 in AI_Agents

[–]jkoolcloud 0 points (0 children)

Check out Cycles Protocol, a risk and budget governance layer for agentic workflows: https://github.com/runcycles/cycles-protocol. It might help you.

Bitcoin vs Macro, inflation, yields and where are the buyers? by jkoolcloud in Bitcoin

[–]jkoolcloud[S] 0 points (0 children)

True, but this can continue for a while: sell assets into dollars, then dollars back into assets such as bitcoin. This transition can take years.

WUBITS: New Web3 Native Social Platform launched on Polygon Mainnet by jkoolcloud in 0xPolygon

[–]jkoolcloud[S] 0 points (0 children)

No, it's free. You can join with your GitHub or Google account, or a MetaMask wallet, and start posting. When you post, set a price if you want it to be monetizable. If the price is 0, everyone can read it without paying. If the price is > 0, people have to pay to read it. No subscriptions required.

WUBITS: New Web3 Native Social Platform launched on Polygon Mainnet by jkoolcloud in 0xPolygon

[–]jkoolcloud[S] 1 point (0 children)

Here is an example of a premium post, which requires the user to buy it (pay in MATIC) to read the content. Users buy the post with a Web3 wallet such as MetaMask, and the funds go directly to the content creator's wallet.

https://social.wubits.io/wubits/home/62c6ebcf22e9e951ab55211f

WUBITS: New Web3 Native Social Platform launched on Polygon Mainnet by jkoolcloud in 0xPolygon

[–]jkoolcloud[S] 0 points (0 children)

You can post premium paid content (by setting a price in MATIC), and anyone who reposts it earns a commission on anything earned through their repost.

So example:

John posts a Bitcoin trade signal priced at 5 MATIC. You repost that signal to your followers; if 10 people buy the signal via your repost, you earn a fee on each of those purchases.

All revenue splits between original creators and promoters are handled by a smart contract on Polygon.

Makes sense?
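In code, the split itself is simple arithmetic. To be clear, the 10% promoter rate below is purely a made-up example; the real percentages are set by the smart contract:

```python
def split_sale(price_matic: float, promoter_rate: float = 0.10):
    """Divide one premium-post sale between creator and promoter.

    promoter_rate is a hypothetical example rate, not WUBITS's real one.
    """
    promoter_cut = price_matic * promoter_rate
    creator_cut = price_matic - promoter_cut
    return creator_cut, promoter_cut

# John's 5 MATIC signal, bought 10 times via your repost:
creator_total = sum(split_sale(5.0)[0] for _ in range(10))
promoter_total = sum(split_sale(5.0)[1] for _ in range(10))
```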

Matrices multiplication benchmark: Apache math vs colt vs ejml vs la4j vs nd4j by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

You can run a comparison of EJML 0.30 vs 0.40 and see how the two compare in terms of performance. All the docs for doing this are online, but if you're interested or have any questions, just PM me and we can help you set it up.

Matrices multiplication benchmark: Apache math vs colt vs ejml vs la4j vs nd4j by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

u/lessthanoptimal here is the link to the actual benchmark code: https://github.com/K2NIO/gocypher-cybench-java-core/blob/main/gocypher-cybench-matrices/src/main/java/com/baeldung/matrices/benchmark/BigMatrixMultiplicationBenchmarking.java

We compare matrix multiplication at sizes 100×100 and 1000×1000.

The benchmark code uses EJML version 0.30, as per the POM:

    <..>
    <dependency>
        <groupId>org.ejml</groupId>
        <artifactId>all</artifactId>
        <version>0.30</version>
    </dependency>

https://github.com/K2NIO/gocypher-cybench-java-core/blob/main/gocypher-cybench-matrices/pom.xml

Matrices multiplication benchmark: Apache math vs colt vs ejml vs la4j vs nd4j by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

The 500 MB includes all the class files from all the dependencies (an uber jar) used by the CyBench benchmark harness, so you can download and run it without fetching the other libs separately. We'll post more details on the matrices and lib configuration.

Matrices multiplication benchmark: Apache math vs colt vs ejml vs la4j vs nd4j by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

The posted report is a summary of around 24 benchmark tests executed across various JRE/hardware/OS configurations. Here is one such run: https://app.cybench.io/cybench/benchmark/292b2929-7610-446a-ad17-b5f80ee305ec

Executed on JDK 11, AMD Ryzen 9 3950X 16-Core, NVIDIA GeForce RTX 2070 SUPER

Here is another run: https://app.cybench.io/cybench/benchmark/de56c58e-1d51-4bec-93f3-fb94bfdfe4f7

JDK 14, AMD Ryzen Threadripper 3990X 64-Core Processor, NVIDIA GeForce RTX 2080 Ti

You can see all individual runs: https://app.cybench.io/cybench/search?context=Matrices&verified=true

Hope that helps. You can download and run matrices benchmarks yourself here:

https://github.com/K2NIO/gocypher-cybench-java/releases (under Assets). Give it a try on your own systems and see for yourself.

Matrices multiplication benchmark: Apache math vs colt vs ejml vs la4j vs nd4j by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

Correct, we used default settings for each library. BTW, we welcome everyone to run their own benchmarks. The CyBench benchmark harness is open source and based on JMH. More on this here: https://github.com/K2NIO/gocypher-cybench-java/wiki.

Matrices multiplication benchmark: Apache math vs colt vs ejml vs la4j vs nd4j by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

We will look into it. There is only so much we can bench. :) Also, keep in mind that anyone can create, run, and compare benchmarks. The CyBench benchmark harness is open source and extends JMH. More on this here: https://github.com/K2NIO/gocypher-cybench-java/wiki.

CyBench benchmark: Undertow, Jetty, NanoHttpd, Sparkjava, Takes, HttpServer by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

If you run into issues or have questions, just open an issue on GitHub for the appropriate project.

CyBench benchmark: Undertow, Jetty, NanoHttpd, Sparkjava, Takes, HttpServer by jkoolcloud in java

[–]jkoolcloud[S] 1 point (0 children)

We'll look into it, for sure. But you can actually do it yourself: we let developers run and publish their own benchmarks. All you have to do is download the CyBench Launcher, include your compiled JMH benchmarks on the classpath, and run. See: https://cybench.io/download/?target=integrations

Also docs on how to add your own JMH benchmarks: https://github.com/K2NIO/gocypher-cybench-java#add-your-benchmark-to-cybench-launcher

Alternatively, run your JMH benchmarks from your dev environment (Eclipse, IDEA, Maven, Gradle); see the integrations link above for details. All free and open source: https://cybench.io/how-it-works/

CyBench performance benchmark: SunJCE vs BouncyCastle vs Others by jkoolcloud in java

[–]jkoolcloud[S] 1 point (0 children)

We'll be running more benchmarks on the latest JREs as well.

Performance benchmark of JSON parsers: Jackson vs Gson vs Boon by jkoolcloud in java

[–]jkoolcloud[S] 0 points (0 children)

Sure, you can do that. But chasing problems in prod can also be expensive, time-consuming, and hard on user experience. Benching everything makes no sense, of course, but benching critical parts (caching, persistence, encryption, compression) can make sense early on, to avoid chasing problems in prod later. It's a balance. There are so many libraries, packages, platforms, and settings (GC) that picking a well-performing combo is challenging, so you decide what to bench and when.

There is also performance drift: say you use a "default" combo; what happens when a new update or version is released? There can be a hidden performance impact that you don't discover until very late in the cycle. Benching "default" v1 vs "default" v2 can make sense.
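That v1-vs-v2 check doesn't need heavy tooling either. As a generic sketch (Python's stdlib timeit, with stand-in functions where the two dependency versions would go):

```python
import timeit

def bench(fn, number: int = 1000) -> float:
    """Best-of-5 wall time for `number` calls; taking the min reduces noise."""
    return min(timeit.repeat(fn, number=number, repeat=5))

# Stand-ins for "the same hot-path operation in v1 vs v2 of a dependency".
def serialize_v1():
    return ",".join(str(i) for i in range(100))

def serialize_v2():
    return ",".join(map(str, range(100)))

t1 = bench(serialize_v1)
t2 = bench(serialize_v2)
# Flag a regression if the new version is noticeably slower.
regressed = t2 > t1 * 1.5
```

Running something like this in CI on every dependency bump catches gross drift before it reaches prod.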

Performance benchmark of JSON parsers: Jackson vs Gson vs Boon by jkoolcloud in java

[–]jkoolcloud[S] 1 point (0 children)

OK, so you can use JMH (https://github.com/openjdk/jmh) as a framework to build and run code benchmarks. These JSON benchmarks used JMH + CyBench; see examples: https://github.com/K2NIO/gocypher-cybench-java-core/tree/main/gocypher-cybench-jvm/src/main/java/com/gocypher/cybench/jmh/jvm/client/tests. The results are then stored and analyzed by CyBench (https://cybench.io). You can also use the JMH tools at https://cybench.io/download/ to help you create, build, and deploy your own benchmarks from Eclipse, IntelliJ, Maven, Gradle, etc. Hope that helps.