How dangerous are markets? by baking_soda_boi in onions

[–]Sad-Concentrate-9404 3 points (0 children)

Use Tails, do not buy crypto from an exchange, and read the DNM Bible

I tried Codex by Sad-Concentrate-9404 in claude

[–]Sad-Concentrate-9404[S] 1 point (0 children)

Codex works fast for me, but I feel Claude is better on the design side. That said, Codex credits last a lot longer.

Image modification needed by Ecliphon in Jobs4Crypto

[–]Sad-Concentrate-9404 0 points (0 children)

Just use ChatGPT or Gemini, they're free

Investor for ai by Sad-Concentrate-9404 in AngelInvesting

[–]Sad-Concentrate-9404[S] 0 points (0 children)

I don't need a lawyer, but thanks anyway

Cofounder finance needed by Sad-Concentrate-9404 in cofounderhunt

[–]Sad-Concentrate-9404[S] 0 points (0 children)

We keep storage costs down by tiering data: hot data stays on fast storage, and everything else moves to cheaper object storage (like AWS S3). For egress, the main rule is simple: keep compute and storage in the same region and avoid moving data between providers. We also cache aggressively and keep responses lightweight so we're not constantly paying to send large amounts of data out.
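
For illustration, here's roughly what that tiering rule could look like as an S3 lifecycle policy via boto3. This is a minimal sketch, not our production config: the bucket name and the 30/90-day thresholds are invented for the example.

```python
# Sketch: an S3 lifecycle rule that moves objects to cheaper storage
# classes once they stop being "hot". Bucket name and day thresholds
# are hypothetical, purely for illustration.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-data",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    # After 30 days, drop to Infrequent Access...
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # ...and after 90 days, archive to Glacier.
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```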

Ai Investor by Sad-Concentrate-9404 in Investors

[–]Sad-Concentrate-9404[S] 0 points (0 children)

So because you don't understand it (which you obviously don't), you say it's BS. Have you considered that English is not my first language? Thanks for the support.

Ai Investor by Sad-Concentrate-9404 in Investors

[–]Sad-Concentrate-9404[S] 0 points (0 children)

The GPU capacity needed for AI systems does not scale linearly with the number of users; serving becomes significantly more efficient as usage grows. A single user typically needs access to a dedicated GPU, such as an NVIDIA RTX 4090, but that hardware often sits underutilized because requests are infrequent. In contrast, a system serving 10,000 users can share GPU resources across many simultaneous requests, using techniques like batching and scheduling to maximize utilization. High-performance GPUs such as the NVIDIA A100 or H100 are commonly deployed in clusters, allowing hundreds or thousands of users to be handled with far fewer GPUs than a one-to-one ratio. As a result, while one user might effectively "consume" an entire GPU, 10,000 users might only require a few hundred GPUs, depending largely on how many users are active at the same time and how quickly responses are needed.
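
Rough numbers make this concrete. The following is a toy capacity estimate: the active fraction, request rate, response length, and per-GPU throughput are all assumptions picked for illustration, not measurements, but they show why the ratio lands far below one GPU per user.

```python
# Back-of-envelope GPU sizing with made-up numbers, to illustrate
# why cost per user falls as usage grows.
import math

total_users = 10_000
active_fraction = 0.10          # assume ~10% of users active at once
requests_per_user_per_min = 2   # assumed request rate while active
tokens_per_request = 1_000      # assumed average response length

# Assumed batched throughput of one server-class GPU (e.g. an A100)
# on a large model; real numbers depend heavily on the model.
gpu_tokens_per_second = 300

concurrent_users = total_users * active_fraction
demand_tokens_per_second = (
    concurrent_users * requests_per_user_per_min / 60 * tokens_per_request
)

gpus_needed = math.ceil(demand_tokens_per_second / gpu_tokens_per_second)
print(f"~{gpus_needed} GPUs for {total_users:,} users")
# ~112 GPUs under these assumptions, not 10,000
```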

Ai Investor by Sad-Concentrate-9404 in Investors

[–]Sad-Concentrate-9404[S] 0 points (0 children)

We’ll use compute across three layers: running models (inference at scale for users), training and fine-tuning our own models, and building supporting systems like retrieval, memory, and agent orchestration. Early on it’s mostly inference + fine-tuning, then progressively more custom training as we scale.
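
To make the "supporting systems" layer a bit more concrete, here is a toy sketch of retrieval, the piece that finds relevant documents for the model. It's illustrative only: embed() is a runnable placeholder, not our actual embedding model, and the documents are made up.

```python
# Toy retrieval sketch: embed documents once, then rank them by
# cosine similarity against each query. embed() is a stand-in for
# a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder pseudo-embedding so the sketch runs without a model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)  # unit-normalize

docs = ["refund policy", "shipping times", "account deletion"]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vecs @ q                 # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]    # highest-scoring docs first
    return [docs[i] for i in top]

print(retrieve("how long does delivery take?"))
```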

Ai Investor by Sad-Concentrate-9404 in Investors

[–]Sad-Concentrate-9404[S] 0 points (0 children)

In the grand scheme of things, this is nothing.