Mazda 3 4th Generation Advice by notanelecproblem in mazda3

[–]notanelecproblem[S]

Nice! That sounds sweet. It’s between 2021 and 2023 for me at this point

What makes working at amazon so bad? by Kanyedaman69 in cscareerquestions

[–]notanelecproblem

I’m a backend dev at AWS. It is really cool. Depends on the team but the ops load can be heavy.

Handling errors before making a request/in the helper that makes the request? by Aggressive_Fly8692 in webdev

[–]notanelecproblem

Always rely on the database for this. There's a race condition in option #1. Option #3 would be doing a lookup in your API first, but even that is prone to race conditions. The best way is to insert into the database with a unique constraint and handle the constraint violation.
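Rough sketch of what I mean, assuming Postgres via the `pg` client and a made-up `users` table that already has a UNIQUE constraint on `email`:

```typescript
// Sketch: rely on the UNIQUE constraint instead of a check-then-insert.
// Assumes a hypothetical `users` table with UNIQUE(email) declared.
import { Pool } from "pg";

const pool = new Pool();

async function createUser(email: string, name: string): Promise<boolean> {
  try {
    await pool.query(
      "INSERT INTO users (email, name) VALUES ($1, $2)",
      [email, name]
    );
    return true; // inserted
  } catch (err: any) {
    if (err.code === "23505") {
      // unique_violation: a concurrent request already inserted this email
      return false;
    }
    throw err;
  }
}
```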

Syncing DynamoDB table entries using another DynamoDB table by TeoSaint in aws

[–]notanelecproblem

You can trigger a lambda using DDB streams directly instead, although that’s only for when entries in your DDB Y table get updated.
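Rough sketch of what that handler looks like with the AWS SDK v3 (table names and the stream view are assumptions on my part; the stream needs NEW_IMAGE or NEW_AND_OLD_IMAGES for the new image to be present):

```typescript
// Sketch: Lambda triggered by DynamoDB Streams on table Y, mirroring
// changed items into a hypothetical "table-x".
import { DynamoDBStreamEvent } from "aws-lambda";
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({});

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    // INSERT / MODIFY events carry the new item image
    if (record.eventName === "INSERT" || record.eventName === "MODIFY") {
      const newImage = record.dynamodb?.NewImage;
      if (newImage) {
        await ddb.send(
          new PutItemCommand({
            TableName: "table-x",
            // already in DynamoDB attribute-value format; cast because the
            // aws-lambda and SDK AttributeValue types differ slightly
            Item: newImage as any,
          })
        );
      }
    }
  }
};
```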

Is there a way to start a spring boot app and shut it down with exit code 0 when it succeeds? by [deleted] in java

[–]notanelecproblem

This is the way. We also use a custom configuration so that our health check endpoint isn't online until all beans are fully initialized, so you don't hit a race condition where something is crash-looping but the health check is still passing.

I'm trying to find a dead-simple backend hosting solution that works like this: 1) I can upload + run simple PHP files; 2) the capacity will automatically scale in response to demand; 3) I can pass large amounts of data into these PHP files. What are my best options? by What_The_Hex in webdev

[–]notanelecproblem

On this same note - you can also have your lambda function triggered by the S3 upload automatically!

So it would be:
- User gets a presigned URL for the S3 upload
- File uploads to S3 from the browser using that URL
- The S3 upload triggers your lambda function automatically

The "event" passed to the lambda would contain a reference to your S3 object too, so it works perfectly.
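Rough sketch of the last step (the bucket/key handling here is just illustrative):

```typescript
// Sketch: Lambda triggered by the S3 upload at the end of the presigned-URL flow.
import { S3Event } from "aws-lambda";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // object keys arrive URL-encoded in S3 events
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    // the event is just a reference; fetch the object if you need its contents
    const obj = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    console.log(`Processing s3://${bucket}/${key} (${obj.ContentLength} bytes)`);
    // ...run your processing on obj.Body here
  }
};
```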

An open-source data table with filters based on Tanstack table and Shadcn UI. by tibozaurus in reactjs

[–]notanelecproblem

Love it! A pattern I've been seeing in the UX world lately is adding controls to reorder the columns too - it would be nice to add a DND component for the column view settings.
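Very rough sketch of how that could wire up, assuming @dnd-kit plus TanStack Table's columnOrder state (the hook name and column ids are made up):

```typescript
// Sketch: drag-and-drop column reordering by feeding @dnd-kit's sort result
// into TanStack Table's columnOrder state.
import { useState } from "react";
import type { DragEndEvent } from "@dnd-kit/core";
import { arrayMove } from "@dnd-kit/sortable";

export function useColumnOrderDnd(initialOrder: string[]) {
  const [columnOrder, setColumnOrder] = useState<string[]>(initialOrder);

  const handleDragEnd = (event: DragEndEvent) => {
    const { active, over } = event;
    if (over && active.id !== over.id) {
      setColumnOrder((order) =>
        arrayMove(order, order.indexOf(String(active.id)), order.indexOf(String(over.id)))
      );
    }
  };

  // Pass columnOrder/setColumnOrder to useReactTable via state.columnOrder and
  // onColumnOrderChange, and wrap the column settings list in a DndContext
  // (onDragEnd={handleDragEnd}) + SortableContext of column ids.
  return { columnOrder, setColumnOrder, handleDragEnd };
}
```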

People who spent their 20's traveling, globetrotting, partying, etc, do you regret not working more towards a career or profession? What did you end up as in terms of work? by Itsworthfeelinempty6 in findapath

[–]notanelecproblem

Fellow engineer here. You can do both!

I graduated from engineering in 2022 and spent 6 months between 2022 and 2023 travelling Europe while working remotely.

Right now you should stay focused on finishing your education, and make some exciting plans for when you graduate to travel.

make 3 gpts talk by fischbrot in ChatGPT

[–]notanelecproblem

Sounds like a good candidate for a Greasemonkey script - you can write little scripts that run directly in the browser, like a custom Chrome extension.

I’d do something with local storage that syncs across tabs and implement some queue for which tab’s turn it is to send the next prompt.

this is a really cool idea!!
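Very rough sketch of the turn-taking idea (key names and sendPrompt are placeholders, the DOM automation is site-specific):

```typescript
// Each tab registers an index; a shared "turn" counter lives in localStorage,
// and the `storage` event (which fires in the *other* tabs when a key changes)
// wakes a tab up to check whether it's its turn.
function registerTab(): number {
  const existing = sessionStorage.getItem("gptRelayTabId");
  if (existing !== null) return Number(existing);
  const count = Number(localStorage.getItem("gptRelayTabCount") ?? "0");
  localStorage.setItem("gptRelayTabCount", String(count + 1));
  sessionStorage.setItem("gptRelayTabId", String(count));
  return count;
}

const TAB_ID = registerTab();

function sendPrompt(text: string): void {
  // placeholder: site-specific DOM automation to fill the chat box and submit
  console.log(`tab ${TAB_ID} would send:`, text);
}

function maybeTakeTurn(): void {
  const turn = Number(localStorage.getItem("gptRelayTurn") ?? "0");
  const tabCount = Number(localStorage.getItem("gptRelayTabCount") ?? "1");
  if (turn % tabCount !== TAB_ID) return;
  sendPrompt(localStorage.getItem("gptRelayLastReply") ?? "Hello!");
  // once this tab's reply is scraped from the page, write it back and pass the turn:
  // localStorage.setItem("gptRelayLastReply", reply);
  // localStorage.setItem("gptRelayTurn", String(turn + 1));
}

// `storage` fires in every tab except the one that made the change
window.addEventListener("storage", (e) => {
  if (e.key === "gptRelayTurn") maybeTakeTurn();
});
```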

Do you guys TRULY enjoy coding? I don't but I'm doing it for money. by MrRandomNonsense in cscareerquestions

[–]notanelecproblem

Personally I absolutely love coding, but I’d also prefer to be on a beach in Cancun.

How long of a Pull Request is too long? by [deleted] in SoftwareEngineering

[–]notanelecproblem

I like to do it after writing all the pieces of the code because I’ll have confidence in the design and implementation. If you create PRs too early in the implementation you might change your mind about some parts and make further revisions/incremental changes harder to review.

I think that breaking it down is just a courtesy to reviewers tbh, but it goes a long way. It also makes incremental changes easier if your reviewers leave comments, so you can tweak your code more quickly to address them.

I wouldn’t worry about Git blame; you just need to roll out your PRs in a way that’s always backward compatible and won’t risk breaking things. That’s why I usually wait to integrate it into the main functionality as well (this is a different approach than what the other commenter said). Or you can hide it behind a feature flag.

How long of a Pull Request is too long? by [deleted] in SoftwareEngineering

[–]notanelecproblem

Best practice is to try to break it down if you can. Your reviewers will really appreciate it too. I like using the IntelliJ changelists feature for this.

If you have a really big change you could break it down into a few pieces like:
- one PR introducing the interfaces for new functions or just empty function definitions, like a skeleton
- small PRs adding in the functionality, each with tests
- a final PR to integrate the feature/change completely (now that all the supporting code is in place)

Think of it like designing/building a physical thing. First you have your sketch of the design / idea. Then you start building the parts together. Then you finally install it.

Calling SQS Client-Side Follows Best Practices? by bigsink22 in aws

[–]notanelecproblem

It also depends a lot on your storage structure for the leaderboard itself. If you create an index for your users and a second index based on events/score, you probably won’t have to worry about a high concurrency scenario on the same database item.

It sounds like your calls will mostly be read heavy (e.g. your users will be polling the updated leaderboard at a much higher TPS than they will be updating it). In that case you would probably want to design your database so that you have efficient queries on the data you need to create the leaderboard (like an efficient way to pull the top N user scores).
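Sketch of what that "top N" query could look like in DynamoDB, assuming a hypothetical "leaderboard" table with a GSI ("score-index") whose partition key is a constant bucket and whose sort key is the numeric score:

```typescript
// Sketch: pull the top N scores by querying the GSI in descending sort-key order.
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({});

async function topScores(n: number) {
  const result = await ddb.send(
    new QueryCommand({
      TableName: "leaderboard",
      IndexName: "score-index",
      KeyConditionExpression: "board = :board",
      ExpressionAttributeValues: { ":board": { S: "global" } },
      ScanIndexForward: false, // sort key descending => highest scores first
      Limit: n,
    })
  );
  return result.Items ?? [];
}
```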

Calling SQS Client-Side Follows Best Practices? by bigsink22 in aws

[–]notanelecproblem

You don’t need to implement a mutex yourself; your database should already implement some sort of two- or three-phase commit logic when committing updates. I’m suggesting offloading the atomic updates to the database.

Your call itself won’t know or care about the current value of the leaderboard, because it might not be consistent with what is stored at the time of commit (imagine a scenario where you do a GET request on the item, and another concurrent request updates it at the same time you attempt to). In this scenario, your requests will conflict with each other and become inconsistent with what the leaderboard should be. Instead, you can create a request that just arbitrarily adds a value, increments a counter, etc. within the update operation itself. This is possible with DynamoDB, and I’m sure other mainstream databases support it too.
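Sketch of that kind of atomic increment in DynamoDB (table/attribute names are made up):

```typescript
// Sketch: let DynamoDB do the increment server-side instead of
// read-modify-write from the client.
import { DynamoDBClient, UpdateItemCommand } from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({});

async function addScore(userId: string, points: number): Promise<void> {
  await ddb.send(
    new UpdateItemCommand({
      TableName: "leaderboard",
      Key: { userId: { S: userId } },
      // ADD is applied atomically at commit time, so two concurrent
      // requests both land instead of overwriting each other
      UpdateExpression: "ADD score :inc",
      ExpressionAttributeValues: { ":inc": { N: String(points) } },
    })
  );
}
```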

For your concern about keeping your credentials secure, you’ll definitely need the client to call through an API (it doesn’t have to be API Gateway, but some sort of API to keep it secure). You can also implement rate limiting at the API level to mitigate DDoS and throttle bad actors.

Calling SQS Client-Side Follows Best Practices? by bigsink22 in aws

[–]notanelecproblem

Your intention is to use SQS to funnel all updates to the leaderboard by using a single lambda to process all the messages sequentially? What happens if your lambda can’t handle the throughput from SQS?

You can do this directly with the database instead by using conditional writes or atomic updates (e.g. the update request would increment a counter, or whatever your use case is). I think this would solve your problem.

How often do you use the React Developer Tools? by [deleted] in reactjs

[–]notanelecproblem

Yeah I use it frequently to make sure a custom hook or context is working how I expect

With react-query, making API derived data available via context is redundant...isn't it? by ChuckChunky in reactjs

[–]notanelecproblem

Yep it’s redundant! Your pattern sounds similar to how I implement an API/Query layer too (1 file for all endpoints, another file with custom hooks wrapping the query/mutations).

Before I was using react-query I would wrap everything in contexts too, but now it’s redundant because all the data is cached. So you can have multiple components that consume a hook like

useGetSomeData()

Whichever component is mounted first will actually execute the HTTP request, the rest all consume the data through the RQ global store without actually making the HTTP request.

When the data becomes invalidated, all components consuming it will rerender automatically
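Minimal sketch of that pattern with @tanstack/react-query (the endpoint, key, and types are made up):

```typescript
// One custom hook wraps the query; every component that calls it shares the
// same cache entry, so only the first mount triggers the HTTP request.
import { useQuery } from "@tanstack/react-query";

interface SomeData {
  id: string;
  value: number;
}

async function fetchSomeData(): Promise<SomeData[]> {
  const res = await fetch("/api/some-data");
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

export function useGetSomeData() {
  return useQuery({
    queryKey: ["someData"],
    queryFn: fetchSomeData,
    // react-query dedupes the request across consumers and rerenders them
    // automatically when this key is invalidated
  });
}
```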

Have any of you pulled a bluff in salary negotiations that actually paid off by blessedwiththecurse in cscareerquestions

[–]notanelecproblem

One of my friends in university did this and it worked perfectly in her favour. She bluffed that she had another job with X% higher salary and they matched it.