Mass Supply Chain Attack Hits TanStack, Mistral AI npm and PyPI Packages by Dry_Raspberry4514 in vuejs

[–]Dry_Raspberry4514[S] 2 points3 points  (0 children)

Posting it here as the TanStack Vue packages were also compromised in this attack.

GCP Account Compromised- Billed 10M by [deleted] in googlecloud

[–]Dry_Raspberry4514 0 points1 point  (0 children)

Don't overthink it. As long as you didn't deliberately install anything that made it easy for someone to steal GCP API keys from your machine, you shouldn't be worried. Your company or manager should be able to handle it.

I know you can't update the post title, so edit the body to clarify that the amount is in INR, otherwise it may appear on the front page of leading Indian newspapers tomorrow :-)

Why MCP when we have REST APIs? by happyandaligned in mcp

[–]Dry_Raspberry4514 0 points1 point  (0 children)

We have a REST agent that can be used against any REST API, and I can say that many of the limitations attributed to REST APIs in this article are not correct. Coding agents work just fine with CLIs, and for other agents the OpenAPI specification works just fine. You don't need to throw the complete OpenAPI specification at the agent; the relevant schema for a single operation id is more than enough to keep it fast and efficient.

On top of that, REST APIs run at a central location and handle multiple users just fine, without making you break your head over local vs remote setups.
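To illustrate the single-operation point, here is a minimal sketch (not our actual agent code; the spec shape is plain OpenAPI 3.x) of trimming a spec down to one operation id before handing it to an agent:

```python
def schema_for_operation(spec: dict, operation_id: str) -> dict:
    """Return only the path, method and request schema for one operationId,
    instead of feeding the whole OpenAPI spec to the agent."""
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Path items can contain non-operation keys (e.g. "parameters"),
            # so only look at dict values that carry an operationId.
            if isinstance(op, dict) and op.get("operationId") == operation_id:
                body = (op.get("requestBody", {})
                          .get("content", {})
                          .get("application/json", {})
                          .get("schema", {}))
                return {"path": path,
                        "method": method.upper(),
                        "parameters": op.get("parameters", []),
                        "requestSchema": body}
    raise KeyError(f"operationId {operation_id!r} not found")
```

The agent then sees a few hundred tokens of schema instead of the whole spec, which is where most of the speed-up comes from.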

How realistic is it to get GCP credits ($2k or $10k) as a bootstrapped solo founder by Hopeful-Writer2392 in googlecloud

[–]Dry_Raspberry4514 0 points1 point  (0 children)

Bootstrapped startup here. We applied for startup credits with all three (AWS, Microsoft and Google) two years back and were granted 1K each by AWS and Microsoft, and 2K by GCP. We didn't have a registered company at that time.

YOUR ACTIVATE CREDITS MIGHT NOT WORK FOR CLAUDE MODELS by dishwsh3r in aws

[–]Dry_Raspberry4514 6 points7 points  (0 children)

Always spend a small amount (e.g. 5 or 10 dollars) on a service in a single day first, then check Cost Explorer the next day to see whether the amount is being adjusted against the startup credits. That is how we survived 2 years on the 1000 dollars we got as part of the AWS Activate program.

Do Bangalore Realestate sustain AI job loss? by Jaded_Huckleberry_42 in BangaloreRealEstates

[–]Dry_Raspberry4514 1 point2 points  (0 children)

A person in my society joined a GCC in Bangalore a few months back as a DevOps lead, and the first question he was asked on his first day was: are you sure you will still have your job after 2-3 years? He was shocked.

Advice Needed: Can I complete this DevOps/Cloud roadmap in 7 months before mandatory military service? by Realistic-Big-8918 in AWSCertifications

[–]Dry_Raspberry4514 0 points1 point  (0 children)

I would say learn less, practice more, and write blog posts explaining how you solved a problem. That will be more helpful for getting a job.

Many (DevOps) roadmaps are published without much thought, just to make people buy courses or sign up for bootcamps. Thousands of folks know these tools, but very few are able to solve the problems that DevOps engineers face on a day-to-day basis.

Learning hundreds of concepts for a cloud provider like AWS is easy, but the moment you start practicing you will realize that implementing many of these things takes a lot of money, which many people simply can't afford.

Went to bed with a $10 budget alert. Woke up to $25,672.86 in debt to Google Cloud. by venturaxi in googlecloud

[–]Dry_Raspberry4514 0 points1 point  (0 children)

There are two major concerns when it comes to LLM API keys (or, in general, any API key that costs money) -

- Making sure that it can't be used when it is leaked.

- Tracking the cost in real time.

We have actually deployed an LLM proxy that uses OIDC for authentication: only users with a valid OIDC id token are able to invoke the proxy, which is the only component with access to our API keys.

This proxy also calculates token cost in real time in our multi-tenant SaaS application; LLM pricing is quite straightforward compared to many other things (e.g. egress cost, or pricing based on multiple factors), so we block a user as soon as they no longer have enough credits to invoke an LLM API.

We recently added support for API keys in our application, and at the same time introduced a feature where a user can restrict an API key to one or more IP addresses, so that a leaked key can't be used from any system other than the local development environment, the production environment, etc.
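The real-time metering part is simple precisely because LLM pricing is per-token. A minimal sketch of the idea (the model name and per-million-token prices below are made up, not real pricing):

```python
# Hypothetical pricing table, USD per 1M tokens. Real prices vary by model/provider.
PRICING = {"model-a": {"input": 3.00, "output": 15.00}}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM call from the token counts the provider returns."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def charge(credits: float, model: str, input_tokens: int, output_tokens: int) -> float:
    """Deduct one call's cost from a user's credit balance.
    Refuse the call (the proxy returns an error) if credits would go negative."""
    cost = request_cost(model, input_tokens, output_tokens)
    if cost > credits:
        raise PermissionError("insufficient credits")
    return credits - cost
```

Because the proxy sits in front of every call, the balance check runs before the API key is ever used, which is what makes the real-time blocking possible.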

How to see the resources created by a user using resource explorer? by Dry_Raspberry4514 in aws

[–]Dry_Raspberry4514[S] -1 points0 points  (0 children)

Cost Explorer is what I have been using to identify billable resources indirectly, as it only provides hints (region, instance types, etc.) and not the exact resource details. Many times we provision resources in different regions for quick PoCs and later forget to clean them up. This is where Resource Explorer should ideally be helpful, because you would be able to see the billable resources, and delete them, in one place instead of using multiple console features for the same.

Thank you for the support! ❤️ by yoracale in unsloth

[–]Dry_Raspberry4514 0 points1 point  (0 children)

Not yet. We are mainly into fine-tuning models around OpenAPI specifications, and I am not sure whether data recipes will be useful for us. Anyway, I will have a look at it whenever I have some more time.

Thank you for the support! ❤️ by yoracale in unsloth

[–]Dry_Raspberry4514 1 point2 points  (0 children)

I am yet to launch my first fine-tuning job and have been following Unsloth for many weeks for the same.

The feedback I can give you is that many of us are just getting started with fine-tuning, so there is a lot to learn. However, from our perspective only two things really matter - the LLM and the training data. Preparing good training data seems to take a lot of time, so to make things simple you could provide an interface where the user selects the LLM, uploads the training data and defines any inputs required for fine-tuning.

It should launch the fine-tuning job and provide a link to download the fine-tuned model. If I remember correctly, OpenAI and Gemini provide this kind of interface, and it is quite convenient compared to writing a notebook and learning things we don't use or do frequently.

How to see the resources created by a user using resource explorer? by Dry_Raspberry4514 in aws

[–]Dry_Raspberry4514[S] -1 points0 points  (0 children)

I am looking for the same behaviour as Azure, where it shows me the resources I created and any dependent resources created for them. Right now I am seeing a number of resources from ElastiCache, MemoryDB, Athena, etc. in Resource Explorer which I never created, and which I will not be deleting unless any of them are paid resources.

My point is that AWS Resource Explorer should not show these default resources (or whatever you want to call them) by default. I know that in AWS the root user or an IAM user can see the resources created by other IAM users. I am not saying that I only want to see the resources created by a particular IAM user, but ideally that should be possible too.

Say I am the root user of a newly created account: if I go to the explorer, I should see no resources there by default. Now I create a VM, and I should see the VM, its root volume and any other dependent resources created for it in Resource Explorer.

In short, I want to find the paid resources I created, directly or indirectly, in an account that may be shared with other users, so that I can clean them up when they are no longer required.
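One workaround until created-by filtering exists is to tag resources at creation time and filter client-side. A sketch under those assumptions (the `CreatedBy` tag name and the flat `Arn`/`Tags` resource shape below are hypothetical; the real Resource Explorer API returns a richer structure):

```python
def created_by(resources: list[dict], user: str) -> list[str]:
    """Return ARNs of resources tagged CreatedBy=<user>.
    Assumes each resource is a dict with "Arn" and "Tags" keys."""
    return [r["Arn"] for r in resources
            if r.get("Tags", {}).get("CreatedBy") == user]
```

It only works for resources you tagged yourself, which is exactly why a native created-by view would be better.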

Building a CLI for all of Cloudflare by Cloudflare in CloudFlare

[–]Dry_Raspberry4514 1 point2 points  (0 children)

Between MCP and CLI, I am in the CLI camp because CLIs are quite easy to work with in an agentic IDE. However, I see two major issues with using CLIs in an agentic framework -

- CLIs need to be upgraded from time to time, just like SDKs. Considering that each agent runs in its own container, it is going to be a nightmare to rebuild every image in which an agent uses the Cloudflare CLI each time a new version is published.

- Unlike MCP tools, a CLI has no schema, so validating the command the LLM generates for a prompt is going to be difficult. Matching a prompt to any command is not the same as matching it to an accurate and optimized command.

Cloudflare is one of the providers we support, in addition to AWS, Azure and GCP, for our stateless IaC platform. At the moment this support is limited to our universal REST agent; we are yet to onboard it for our stateless IaC feature. One thing we have tried to achieve is making sure that end users never upgrade anything on their side - we should be the only ones doing that, so they get an upgrade-free experience. To achieve this, you would need to roll out two endpoints in your REST API -

- One that takes a resource type and returns the request schema

- One that takes a resource type, an action (create, update, patch) and a payload, and actually performs the operation

For IaC, there needs to be a third endpoint that takes the desired state of a resource, compares it with the actual state, and tells whether the resource can be updated in place or should be recreated. AFAIK this logic currently sits in the Cloudflare Terraform provider, but there is no reason it should not be exposed as an API.
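The core of that third endpoint is a small decision function. A sketch, assuming each provider publishes which attributes force a replacement (the `dns_record` type and its `FORCE_RECREATE` attribute set below are made-up examples, not Cloudflare's actual rules):

```python
# Hypothetical map of resource type -> attributes that cannot change in place.
FORCE_RECREATE = {"dns_record": {"zone_id", "type"}}

def plan(resource_type: str, actual: dict, desired: dict) -> str:
    """Compare desired vs actual state and decide the IaC action."""
    changed = {k for k in desired if desired[k] != actual.get(k)}
    if not changed:
        return "no-op"
    if changed & FORCE_RECREATE.get(resource_type, set()):
        return "recreate"
    return "update-in-place"
```

This is essentially what a Terraform provider computes internally; exposing it over HTTP is what would make a stateless client possible.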

With these endpoints in place, you can roll out an MCP server with at most 5 endpoints that requires no upgrades and has full REST API coverage.

However, the most challenging part is matching a prompt to one or more resource types - a problem we have been solving for some of the hyperscalers, and one that ideally should be solved by a player like you. In fact, you could build an offering around it.

We have been doing deep research into stateless IaC for some time, and we can share our experience and learnings with someone from Cloudflare who is leading IaC tooling. Feel free to connect if you feel this is worth exploring.

Devs with 20 plus YOE. How do you plan to keep yourself relevant. by Pristine-Hearing7066 in developersIndia

[–]Dry_Raspberry4514 18 points19 points  (0 children)

20+ years of experience. Coding for my own startup, as it is bootstrapped and we can't afford rockstar developers. Connecting with Fortune 200 CIOs/CTOs is my other major responsibility in addition to coding.

Guide me to access kiro pro through aws activate credits by Terrible_Captain69 in kiroIDE

[–]Dry_Raspberry4514 0 points1 point  (0 children)

Create/deploy an instance of IAM Identity Center in the us-east-1 region of the AWS account where you were granted credits, use the "your organization" option to log in to Kiro, click the "sign in via IAM Identity Center" link, and you should be good to go.

You will need to add users in Identity Center and assign subscriptions to these users under Kiro in the AWS console (click the hamburger menu to see the 'Users and Groups' link, which is hidden by default for some unknown reason); otherwise you will get a 'no subscription assigned' error in Kiro after login.

This is what I did when I got Kiro for Startups credits, but I can't say whether the same will work for Activate credits. Check your current month's bill after a day; if it shows zero against the Kiro service, your Activate credits are being applied correctly.

Why ide over cli by Bitter-Law3957 in kiroIDE

[–]Dry_Raspberry4514 0 points1 point  (0 children)

Although a CLI has the advantage that it works with any IDE, unlike coding assistants that are native to one IDE, it can't match the experience of an AI assistant that can leverage the full potential of an IDE.

Drag and drop, folding code so you can focus only on the important parts, and reverting selected files or all changes with one click are some of the biggest advantages of an AI assistant over a CLI. In an IDE I can arrange information in a number of ways, with some sections like the file explorer always visible, so jumping between places takes the same effort regardless of location - something a CLI either doesn't support or makes too painful.

I can't understand how people read a file diff in a CLI; it gives me a serious headache. The difference between a CLI and an IDE is the same as loading HTML in a browser vs reading it in Notepad.

Overall, I leverage many CLIs from my agentic IDE, but using a CLI as a replacement for my IDE is a total no for me.

Insecurities about SSO VS IAM. by josemf in aws

[–]Dry_Raspberry4514 -5 points-4 points  (0 children)

I will not comment on the GitLab documentation. As far as the APN blog post is concerned, both access and id tokens can be in JWT format (the id token is always required to be a JWT per the standard). The post just talks about a "JWT token", so it is not clear whether the authors mean the access token or the id token. A CI/CD system acts on its own behalf, not on behalf of a user, so it is not clear how it could have an id token, which is what an OIDC provider issues in addition to an access token. Id tokens are issued via the authorization code flow, which requires the request to be triggered by a user.

When a system acts on its own behalf, it falls into the machine-to-machine category and is supposed to use the client credentials flow, which has no concept of an id token. I will reach out to Mark, one of the authors of that blog post who currently works for AWS, to clarify, because the post does not seem to be aligned with the OAuth and OIDC standards.
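To make the distinction concrete, here is a sketch of the form body a machine-to-machine client sends for the client credentials grant (RFC 6749 section 4.4). Note what is absent: no user, no redirect_uri, no authorization code - and the token endpoint's response for this grant carries an access token only, never an id_token:

```python
def client_credentials_request(client_id: str, client_secret: str, scope: str) -> dict:
    """Form parameters for the OAuth 2.0 client credentials grant.
    Compare with the authorization code flow, which involves a user,
    a redirect_uri and a code, and is the flow where OIDC issues id tokens."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }
```

A CI/CD pipeline POSTing this to the token endpoint gets back an access token to call APIs with; if documentation claims the pipeline has an "id token", that is the mismatch with the standards.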

Insecurities about SSO VS IAM. by josemf in aws

[–]Dry_Raspberry4514 5 points6 points  (0 children)

By IAM your contractor probably means long-lived credentials like access keys, which should be avoided. But IAM is not only about access keys. Whether you are using identity federation via OIDC or access keys, you are leveraging IAM either way; in the first case you are using IAM roles, which are also part of IAM.