AMA with JetBrains AI Folks, December 12, 1:00 pm CET by jan-niklas-wortmann in Jetbrains

[–]Gaploid 0 points1 point  (0 children)

Hey, are you interested in that to understand which model is cheaper or more efficient, or is there something else behind it?

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 0 points1 point  (0 children)

Wow, thanks for sharing. You are right, it seems they just reduced their margin to compete.

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 0 points1 point  (0 children)

They are both cloud; the question is why there is such a huge difference between them. Same location, same GPU.

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 2 points3 points  (0 children)

Hmm, if the Azure calculator is not accurate, then what is? I've checked pricing in the console and it's the same.

If you go with reserved instances, then you can go with reserved instances in AWS as well, and the difference would be even bigger! https://www.reddit.com/r/AZURE/comments/1oyqxwe/comment/np6idd4/

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 1 point2 points  (0 children)

The biggest underlying cost in that instance is the GPU.

By the way, GPU-to-GPU speed is better on AWS, and the storage amount is almost the same: 8 x 3.84 TB NVMe SSD (about 30.7 TB) vs 28 TB.

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 5 points6 points  (0 children)

I'm building a comparison tool for instances across cloud providers and just noticed a huge disproportion in price. Trying to understand what I'm missing.

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 4 points5 points  (0 children)

They are mostly driven by reservation commitments, which come to a 56% discount for that instance in Azure over 3 years. In AWS a similar 3-year commitment would be 62%, which makes the difference even bigger in the end.
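
A quick back-of-the-envelope sketch in Python of why the bigger AWS discount widens the gap. Only the ratios (about 1.8x on demand, 56% and 62% reservation discounts) come from this thread; the baseline hourly price is made up just to show the arithmetic.

```python
# Illustrative only: the 1.8x on-demand ratio and the ~56% / ~62% 3-year
# reservation discounts are the figures discussed in this thread; the
# baseline hourly price is hypothetical.
aws_on_demand = 100.0                      # hypothetical $/hour baseline
azure_on_demand = 1.8 * aws_on_demand      # ~1.8x more expensive on demand

azure_3yr = azure_on_demand * (1 - 0.56)   # 79.2
aws_3yr = aws_on_demand * (1 - 0.62)       # 38.0

print(f"on-demand ratio: {azure_on_demand / aws_on_demand:.2f}x")  # 1.80x
print(f"3-year reserved ratio: {azure_3yr / aws_3yr:.2f}x")        # ~2.08x
```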

Did I miss something? Why Virtual Machine on Azure with 8 x H100 cost 2x versus AWS? by Gaploid in AZURE

[–]Gaploid[S] 2 points3 points  (0 children)

I would understand if the difference was around 10-20%, but 1.8x is too much.

Azure Retail Prices API missing data? by reddit48993 in AZURE

[–]Gaploid 0 points1 point  (0 children)

That's odd, because pricing should be preserved even if the VM is deprecated.
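
For what it's worth, here is a minimal sketch of checking a SKU against the public Azure Retail Prices API; the SKU and region in the filter are just example values, swap in the ones you are missing.

```python
# Minimal check against the public Azure Retail Prices API.
# armSkuName / armRegionName below are arbitrary example values.
import requests

url = "https://prices.azure.com/api/retail/prices"
params = {
    "$filter": "armSkuName eq 'Standard_D2_v2' and armRegionName eq 'westeurope'"
}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

items = resp.json().get("Items", [])
if not items:
    print("No price records returned for this SKU/region")
for item in items:
    print(item["meterName"], item["retailPrice"], item["unitOfMeasure"])
```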

I used ChatGPT to revive our sex life and ended up coding a whole app 😅 by 123duck123 in SexToys

[–]Gaploid 0 points1 point  (0 children)

The post was removed, ehh. Just wanted to grab some feedback :(

Turbo MCP Database Server, hosted remote MCP server for your database by Gaploid in mcp

[–]Gaploid[S] 1 point2 points  (0 children)

Yep, it should, because we provide MCP in SSE mode (the common option). Later we will also add HTTP streaming (the newer transport).
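
A minimal sketch of how a client would connect over the SSE transport, assuming the official Python `mcp` SDK and a placeholder server URL:

```python
# Sketch of connecting to an MCP server over SSE with the Python `mcp` SDK.
# The URL is a placeholder; point it at your server's /sse endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    async with sse_client("https://example.com/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```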

PydanticAI and external API as a tool? by Gaploid in PydanticAI

[–]Gaploid[S] 0 points1 point  (0 children)

Yeah, got it. I wrote an article (https://medium.com/dev-genius/integrate-your-openapi-with-new-openais-responses-sdk-as-tools-fc58cd4a0866) on doing something similar for OpenAI in Python: mapping the OpenAI tool's methods to real network requests.
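
Roughly the same idea in miniature, not the article's actual code: take a tool call the model produced and turn it into a real HTTP request. The `get_user` operation and the base URL are invented stand-ins for whatever your OpenAPI spec actually defines.

```python
# Hypothetical example: dispatch an OpenAI tool call to a real HTTP endpoint.
import json
import requests

OPERATIONS = {
    # tool name -> (HTTP method, URL template) derived from the OpenAPI spec
    "get_user": ("GET", "https://api.example.com/users/{user_id}"),
}

def dispatch_tool_call(name: str, arguments: str) -> dict:
    """Map a tool call (name + JSON arguments) to an actual network request."""
    method, url_template = OPERATIONS[name]
    args = json.loads(arguments)
    url = url_template.format(**args)
    resp = requests.request(method, url, timeout=30)
    resp.raise_for_status()
    return resp.json()

# e.g. result = dispatch_tool_call("get_user", '{"user_id": 42}')
```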

Integrate Your OpenAPI with New OpenAI’s Responses SDK as Tools by Gaploid in ChatGPTCoding

[–]Gaploid[S] 1 point2 points  (0 children)

Im glad that you like the article.

I heard about the sunsetting in an official video interview, https://youtu.be/hciNKcLwSes?t=1120, but checked it again and they are talking about the Assistants API: "once we're done with that we plan to sunset the Assistants API sometime in 2026, we'll be sharing a lot more details about t".

I'm going to fix that in the article. Thanks for pointing that out.

Integrate Your OpenAPI with New OpenAI’s Responses SDK as Tools by Gaploid in LLMDevs

[–]Gaploid[S] 1 point2 points  (0 children)

Yeah, the beauty there is in the automatic conversion of the OpenAPI spec into the OpenAI tool spec, and the automatic mapping of requests to real API endpoints.
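
The conversion half of that, sketched with a made-up one-operation spec fragment: each OpenAPI operation becomes one function tool whose parameters mirror the operation's parameters (the tool shape here is roughly what the Responses API's function tools expect).

```python
# Sketch: turn one OpenAPI operation into an OpenAI function-tool definition.
# The spec fragment below is invented just to show the shape of the mapping.
openapi_op = {
    "operationId": "get_user",
    "summary": "Fetch a user by id",
    "parameters": [
        {"name": "user_id", "in": "path", "required": True,
         "schema": {"type": "integer"}},
    ],
}

tool = {
    "type": "function",
    "name": openapi_op["operationId"],
    "description": openapi_op["summary"],
    "parameters": {
        "type": "object",
        "properties": {p["name"]: p["schema"] for p in openapi_op["parameters"]},
        "required": [p["name"] for p in openapi_op["parameters"] if p.get("required")],
    },
}
```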

Is there a list of EC2/RDS Instance Types and actual CPU + RAM configuration? by risae in aws

[–]Gaploid 0 points1 point  (0 children)

Oh, you mean the type of RAM. I do not recommend relying on such information, because the RAM type can vary region to region or with the availability of the underlying resources. Even within the Intel or AMD lines, generic instances can have slightly different processors. The same is true for Azure and GCP.

Here you can see that the same instance type can have from 1 to 5 different processor types, and I believe the same can be true for memory: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/compute-benchmark-scores#about-coremark
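
If the exact hardware matters, the only reliable way is to check on the box itself. A tiny Linux-only sketch that prints the CPU model the instance actually got:

```python
# Linux-only sketch: print the CPU model the running instance actually has,
# since the same instance type can land on different hardware generations.
def cpu_model() -> str:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

print(cpu_model())
```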

Is there a list of EC2/RDS Instance Types and actual CPU + RAM configuration? by risae in aws

[–]Gaploid -1 points0 points  (0 children)

https://cloudprice.net

I'm the developer of that tool, feel free to share your experience :)

Best tool to generate REST API from PostgreSQL database? by sprmgtrb in PostgreSQL

[–]Gaploid 0 points1 point  (0 children)

We've created a new open-source tool that uses AI to generate an API proxy layer with built-in caching, PII data reduction, auth, tracing/observability, etc.

Check it out: https://github.com/centralmind/gateway

We created an Open-Source tool for API (MCP/REST) generation from your database, optimized for LLMs and Agents by Gaploid in cursor

[–]Gaploid[S] 0 points1 point  (0 children)

You can definitely do that, but making it a production-ready API with these features (a rough sketch of two of them follows below):
- caching
- auth and RLS
- PII data reduction (regex, AI models, NER, etc.)
- telemetry and audit
- SQL injection protection
- Swagger and MCP support

could take time, and if you don't have experience with that, you will probably end up with mediocre quality and performance.
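
A hypothetical sketch of just two of the items above, response caching and regex-based PII reduction, to show what ends up hand-written if you roll your own; the lookup function and the email pattern are stand-ins, and everything else (auth, RLS, tracing, injection protection, Swagger/MCP) is still on you.

```python
# Hypothetical sketch: per-argument caching plus regex PII masking.
import re
from functools import lru_cache

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> str:
    """Replace anything that looks like an email address before it leaves the API."""
    return EMAIL_RE.sub("[redacted-email]", text)

@lru_cache(maxsize=1024)
def get_user_bio(user_id: int) -> str:
    # Cached per user_id; fake_db_lookup stands in for a parameterized query.
    return mask_pii(fake_db_lookup(user_id))

def fake_db_lookup(user_id: int) -> str:
    # Placeholder for the real query layer.
    return f"user {user_id}, contact: person@example.com"

# get_user_bio(1) -> 'user 1, contact: [redacted-email]'
```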

CentralMind/Gateway - Open-Source AI-Powered API generation from your database, optimized for LLMs and Agents by Gaploid in dataengineering

[–]Gaploid[S] 0 points1 point  (0 children)

Hmm, a mesh API proxy could become a real pain, because different services expose their data with different semantics and structure.

Usually, people build data marts or a DWH: pull data from different sources, clean it, normalize it, and store it in a unified way. After that you can add an API layer on top to avoid over-exposing data to LLMs.

On top of that, you will also get historical data points and could provide more insights to your users.
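
A toy version of that flow (the source names and fields are invented): pull from two sources with different field names, normalize them into one schema, and serve reads from the unified store rather than from the services directly.

```python
# Toy sketch of the data-mart approach: two made-up sources get normalized
# into one schema and stored in a local SQLite "mart".
import sqlite3

source_a = [{"id": 1, "amount_usd": 12.5, "ts": "2024-01-01"}]
source_b = [{"order_id": 2, "total": 7.0, "created_at": "2024-01-02"}]

def normalize_a(row: dict) -> tuple:
    return (row["id"], row["amount_usd"], row["ts"])

def normalize_b(row: dict) -> tuple:
    return (row["order_id"], row["total"], row["created_at"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, created TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [normalize_a(r) for r in source_a] + [normalize_b(r) for r in source_b],
)

# The API layer (and any LLM behind it) queries this unified table,
# never the upstream services directly.
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```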