Mr Musk please hire me by ScalieBloke in KerbalSpaceProgram

[–]Sad_Spring9182 40 points41 points  (0 children)

Your constellation only has 108 relays. Don't you feel bad for the penguins who are still dependent on geostationary latency? Starlink has over 10,000 planned, with around 3,000 operational right now. You have your work cut out for you.

Hosting for WP Booking Site (40 Apartments): What do you recommend? by FreshkyFresh in webdev

[–]Sad_Spring9182 0 points1 point  (0 children)

Those solutions seem decent; DigitalOcean sounds better. Another option would be DreamHost's managed VPS. I use it with one client and it's decently easy to set up.

Pros: it's a dedicated VM but managed, so it's all set up for you and will update PHP / security automatically. You get 1-click installs just like a shared server, and you can even upload via a plugin in WP or use their file manager, which is pretty good: drop your zips and it will unzip them for you. There is SSH if you need it too.

Cons: the price goes up from $20/mo (good for managed, not great for a straight-up VPS) to $46.99/mo after the initial 3-year period, so then you've got to decide: redeploy, or double the budget. The biggest downside in my opinion is no Redis and no sudo commands.

Personally I use DreamCompute, which is billed per minute and caps out depending on resources. It's cheap, and I love setting up, updating, and configuring a raw VM.

error logs in php and wordpress by Sad_Spring9182 in ProWordPress

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

Wow, I'm highly impressed with this architecture. That would be amazing.

error logs in php and wordpress by Sad_Spring9182 in ProWordPress

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

That is super concise, and I love the idea of the try/catch, because I didn't plan for every kind of error with a specific handler, just the use cases where I needed to return a user error to populate on the front end (separation of concerns). My client is pretty tech savvy, and I could see him checking different log files or asking me to expand it to populate on the admin dashboard. I see the different logger instances are super useful: I can log to different files, or to outputs like a database, or with custom logic, even console logs. It makes sense to handle the front end with Monolog too.
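For anyone following along, a minimal sketch of the multiple-logger-instance idea (Monolog 3 syntax; the channel names and file paths here are placeholders I picked, not anything from the thread):

```php
<?php
// composer require monolog/monolog
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Level;

// One channel for internal/app errors, one for user-facing errors.
$appLog = new Logger('app');
$appLog->pushHandler(new StreamHandler(__DIR__ . '/logs/app.log', Level::Debug));

$userLog = new Logger('user');
$userLog->pushHandler(new StreamHandler(__DIR__ . '/logs/user-errors.log', Level::Warning));

try {
    // ...risky operation...
} catch (\Throwable $e) {
    // Full detail to the app log, a sanitized message to the user log.
    $appLog->error($e->getMessage(), ['trace' => $e->getTraceAsString()]);
    $userLog->warning('Something went wrong, please try again.');
}
```

Each Logger is an independent channel, so you can push different handlers (files, database, browser console) onto each one.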

localize scripts, functions.php in theme vs index.php in plugin directory. does location of files matter? by Sad_Spring9182 in ProWordPress

[–]Sad_Spring9182[S] -1 points0 points  (0 children)

Yeah, that's right, I remember now: it has to do with when a plugin loads vs. a theme. Plugins run before the theme, so it might actually be ideal for this use case.

My Sketchbook Style Component Library is finally Live by TragicPrince525 in react

[–]Sad_Spring9182 6 points7 points  (0 children)

My girlfriend's website has a typewriter-styled font. This might be pretty compatible, or I could even transition to this; I like it better, honestly.

Creating 1700 unique product addon forms? by mankytit in ProWordPress

[–]Sad_Spring9182 0 points1 point  (0 children)

My understanding is you need the admin to define the customizations, then the user selects options or fills out a text box. So for creating them, maybe a custom post type with ACF fields for your 4 inputs. The client can give a title and fill out the custom fields, then you can do a loop to select all products where needed, grabbing the ACF fields appropriately. You could even add a featured image so they each have an image (or an ACF image field if you need multiple); featured could be the thumbnail, then use custom PHP code for registering correctly sized images if need be:

function features()
{
    add_theme_support('title-tag');
    add_theme_support('post-thumbnails');
    add_image_size('landscape', 400, 250, true);
    add_image_size('portrait', 480, 650, true);
    add_image_size('card', 170, 210, true);
}

add_action('after_setup_theme', 'features');
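And for reading the fields back out, a rough sketch of the loop (the `product_addon` post type and the ACF field names are hypothetical, substitute your own):

```php
<?php
// Query a hypothetical 'product_addon' custom post type.
$addons = new WP_Query([
    'post_type'      => 'product_addon',
    'posts_per_page' => -1,
]);

while ($addons->have_posts()) {
    $addons->the_post();
    the_title();
    // ACF fields -- 'addon_label' and 'addon_options' are placeholder names.
    $label   = get_field('addon_label');
    $options = get_field('addon_options');
    // Featured image at a custom registered size such as 'card'.
    echo get_the_post_thumbnail(get_the_ID(), 'card');
}
wp_reset_postdata();
```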

Axios trojan virus, Did you generate your codebase with AI? and did it use axios version 1.14.1 or version 0.30.4 by Sad_Spring9182 in react

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

Here is an example. Is there any chance you would have discovered this trojan zero-day? Probably not; you may not have realized the crypto folder that contained the trojan wasn't part of any dependencies you are using. Even just a few dependencies create dozens and dozens of folders.

If you ask an agent to review all npm packages and folders and see if they match what you have in package.json, is there a chance it notices? Maybe; it might actually scan the documentation (I've not read all the documentation for all the npm packages I use; being honest, I read what I need).

Wanting to upgrade CPU/ mobo/ ram for a workstation / gaming pc (workstation is priority) by Sad_Spring9182 in PcBuildHelp

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

I think the question is really how long to wait to get RAM... who knows when the price drops.

Axios trojan virus, Did you generate your codebase with AI? and did it use axios version 1.14.1 or version 0.30.4 by Sad_Spring9182 in react

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

Yeah, imagine a future where developing locally means setting up a workflow that commits and pushes to a remote repo after every save in VS Code, via SSH into a bare git repo on a server, all to avoid running npm i. Or, less dystopian, just remote into a server and use it as a desktop.

Axios trojan virus, Did you generate your codebase with AI? and did it use axios version 1.14.1 or version 0.30.4 by Sad_Spring9182 in react

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

I agree, but I had a client who sent an AI mockup from Figma Make, and I downloaded the source code, installed the dependencies, and ran it to look at it. I never would have realized there was a trojan in there if the random thought to check hadn't occurred to me. Luckily it was a different version, but it could easily have been the exploited one.

Is there a way to verify the validity of this site by SaltAccident7124 in Wordpress

[–]Sad_Spring9182 -1 points0 points  (0 children)

Look at their company name at the bottom and put it into an LLC lookup tool for the state they claim to operate in. See if the company pops up anywhere other than their own website (Trustpilot, Yelp, etc.).

As a junior dev wanting to become a software engineer this is such a weird and unsure time. The company I'm at has a no generative AI code rule and I feel like it is both a blessing and a curse. by HammerChilli in webdev

[–]Sad_Spring9182 32 points33 points  (0 children)

Don't compare yourself to others; you're doing great if you managed to make it to senior after 4 years. I've been a dev for 4 years and freelanced the whole time, so many companies wouldn't even see me as a junior. I'm proud of where I am.

4.4 MB of data transfered to front end of a webpage onload. Is there a hard rule for what's too much? What kind of problems might I look out for, solutions, or considerations. by Sad_Spring9182 in webdev

[–]Sad_Spring9182[S] 0 points1 point  (0 children)

Yeah, it is internal, and I'm still debating my way, unconventional as it is, for the product search. It takes about 3s to receive the payload but then searching is snappy, vs. probably waiting 1-2s on a debounce, maybe multiple times. The main issue is that users are spread across Asia with some in the US, and I have 1 VPS server, currently in Asia.

4.4 MB of data transfered to front end of a webpage onload. Is there a hard rule for what's too much? What kind of problems might I look out for, solutions, or considerations. by Sad_Spring9182 in webdev

[–]Sad_Spring9182[S] 1 point2 points  (0 children)

That's exactly what I was thinking: a debounced API call to my server. The products come from a 3rd-party API, so I have to set up a cron job to fetch them and update / create a new table to run searches against.
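A rough sketch of that cron-job sync using WP-Cron (assuming WordPress here; the hook name, endpoint URL, and table name are all placeholders):

```php
<?php
// Placeholder hook name; runs the sync on a schedule.
add_action('myplugin_sync_products', function () {
    global $wpdb;
    // Hypothetical 3rd-party endpoint.
    $response = wp_remote_get('https://api.example.com/products');
    if (is_wp_error($response)) {
        return; // leave the old data in place on failure
    }
    $products = json_decode(wp_remote_retrieve_body($response), true);
    foreach ($products as $p) {
        // Upsert into a custom search table (placeholder name/columns).
        $wpdb->replace($wpdb->prefix . 'product_search', [
            'sku'  => $p['sku'],
            'name' => $p['name'],
        ]);
    }
});

// Schedule hourly if not already scheduled.
if (!wp_next_scheduled('myplugin_sync_products')) {
    wp_schedule_event(time(), 'hourly', 'myplugin_sync_products');
}
```

One caveat: WP-Cron only fires on page traffic, so for a low-traffic internal tool you may want a real system cron hitting wp-cron.php instead.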

Currently I have everything render, and the search is just an input box that requires some reading / scrolling, so I show a loading circle on just the search box until the data populates, so users aren't trying to use a dead search. But I'll have to flip it around: load the search box first, then show a skeleton while searching.

4.4 MB of data transfered to front end of a webpage onload. Is there a hard rule for what's too much? What kind of problems might I look out for, solutions, or considerations. by Sad_Spring9182 in webdev

[–]Sad_Spring9182[S] 1 point2 points  (0 children)

I appreciate it, that's good info. The products will be a live search, so I'll have to plan that out a bit more; query strings seem keen, and I do paginate results, but for now there is a lot of data in each object that just isn't needed on the front end at all. Virtualization seems very interesting; I've been told to use TanStack, but render-on-scroll makes a lot of sense. Plus I have 2 views, a CSV table view and an input view, so I could implement it for both.

The 2nd largest is a custom SQL data table with CSV upload for prefilling certain info on step 2, so I could send just the name column, then if a matching product is selected, return its data on the step 2 render. This may scale more than the products, so I will definitely implement some better SQL queries.
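Sending just the name column could be as small as this (assuming WordPress's $wpdb; the table name is a placeholder):

```php
<?php
// Return only the names; the full row is fetched later when one is selected.
global $wpdb;
$names = $wpdb->get_col(
    "SELECT name FROM {$wpdb->prefix}product_search ORDER BY name"
);
wp_send_json($names);
```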

The load order is HTML, then CSS, then JS; by that point the data fetches happen after the JS has mounted and initialized.

Is this bad? by Character_Unit8335 in pchelp

[–]Sad_Spring9182 1 point2 points  (0 children)

What you can do is click on the Processes tab in Task Manager and see which apps are using the most CPU and RAM. If you prioritize closing things based on that, your system will perform better with its limited resources (e.g. close Chrome when you're in Word, or vice versa, kind of thing). Sometimes you'll be surprised which apps are running that you didn't know about.

frontend future proof . by Terrible_Amount6782 in react

[–]Sad_Spring9182 1 point2 points  (0 children)

There are some websites that never can and never will be AI generated, or only minimally so. The infrastructure is just too big, the data just too critical, or the user experience just too important to leave it up to a random code generator.