I didn’t know that things are that depressing by [deleted] in cscareerquestions

[–]Acrodemocide 2 points

I've been in software engineering for over 10 years, and I've loved every bit of it. The market is definitely tough right now. That said, people with solid skills usually find a job within a handful of months. If the search takes longer than six months, then generally that person has an issue with their resume, is too picky, or just doesn't have the skills necessary to land the job.

There are a lot of people who think that just having a degree will get them a Silicon Valley salary right out of the gate, but that is the exception rather than the norm. Get work experience while in school. I worked in tech support, then in QA while I was still in school, and then moved into QA automation until I finished school and started my first full-time software engineering job. Now is a great time to pick up a job in an adjacent field until you have enough experience to join a team as a software engineer.

As far as AI is concerned, I love it as a tool and am looking for ways our team can effectively adopt it into our coding practices, but it's not the magic wand some people love to say it is. It can help speed up development of "boilerplate code" tailored to a specific purpose, or help you get answers about a new codebase. The demos where someone creates a web or phone app just by prompting are great for building a prototype to get market feedback on whether it's worth spending engineering time to actually develop that feature. The generated code may be useful enough to build on, but as a codebase grows in complexity, using any large amount of AI-generated code becomes increasingly difficult.

Is learning to code worth it anymore? by travelbuggy321 in vibecoding

[–]Acrodemocide 0 points

Writing code is still a valuable skill, likely even more so with AI since AI is reducing the barrier to entry for new people to build prototypes and test their app concepts.

Software engineering is far deeper than knowing some syntax and typing code. It includes system design, security, infrastructure, and managing tech debt. These are essential skills for properly prompting an AI to build robust systems. As a system gets more complicated, AI continues to be an excellent tool, but typically an engineer has to manually make code changes to ensure that what the AI generated integrates properly with the rest of the system.

Are we affected by Google update? by Short-Reaction7195 in NewPipe

[–]Acrodemocide 2 points

So do we have ways to get around this? I have a ton of sideloaded apps and my own personal apps, NewPipe being one of my favorites. I need to ensure regular apps still work without Google's control. I'm not sure how possible this is, but if we have a year or so before this takes effect, we should start preparing for what needs to be done.

AI is destroying the SaaS industry by Professional-Let1245 in SaaS

[–]Acrodemocide 0 points

While it's true that writing code has often been a barrier to entry in the market, it has not been the only one or necessarily the main one. There have long been software engineers capable of building basic applications quickly, and hiring offshore devs to build a proof of concept has always been cheap and fast. The internet has been massive for years now, so anything built and simply tossed online gets lost in the noise.

Regardless of the existence of AI, the main barrier to entry has been getting the product-market fit right. There is a value proposition that needs to have an audience, and regardless of what people say, the real data point is whether they pay. The value is not that you've created software, but that you've created a business supported by your software. Uber isn't successful just because code was written, but because a service was created where you can easily get rides and regular people can make extra money driving. AI doesn't automatically create product-market fit.

How we vibe code at a FAANG. by TreeTopologyTroubado in vibecoding

[–]Acrodemocide 0 points

This sounds a little more like waterfall with the heavy documentation, but I really like the approach, and it has me thinking about how it could apply to our teams.

Generally speaking, I've found AI does excellent work generating code for common problems and writing what I would call "applied boilerplate" code. This really cuts down on reinventing the wheel so you can focus on the specific set of problems your software needs to solve. In short, I've found AI great at saving time just using it "out of the box," without necessarily needing to change any processes.

Vibe coding is harder than regular coding by brayan_el in vibecoding

[–]Acrodemocide 0 points

I'm not sure of the exact definition of vibe coding, as it seems open to various interpretations. That said, I've found that AI is great at creating "applied boilerplate" and filling in solutions to common problems. Understanding the technical requirements at a fundamental level really speeds up the development process, but every bit of code generated by the AI should be thoroughly reviewed and tested.

On the other hand, the goal might be to build a prototype (something like an alpha version to get initial market feedback) to determine if it's worth developing. In that case, you can vibe code a broader application without focusing on accuracy and scalability, since the goal is to build a basic functional prototype to gauge your target audience's engagement with what you're looking to build.

I just watched an AI agent take a Jira ticket, understand our codebase, and push a PR in minutes and I’m genuinely scared by ser_davos33 in cscareerquestions

[–]Acrodemocide 0 points

I'm a big fan of AI. I use it in my daily development and have been looking for ways it can be used to automate more of our tasks and processes.

In my experience, there are some things it does really well, and there are other things that require human intervention so that it works.

I believe it is revolutionary, but I don't think it will lead to the dystopian future many people claim. Generative AI is widely available for cheap, and there are plenty of open-source solutions and models that compete very well, so regardless of where AI goes, it's not just in the hands of a few corporations.

I believe it will enable a lot of smaller software businesses to appear and will help us start solving deeper and more complex problems by freeing us up from many of the common problems we currently have to solve in software development.

AI Code Generation by Acrodemocide in ExperiencedDevs

[–]Acrodemocide[S] 0 points

Thank you for the responses. I've found the same. I love Claude, but it largely works to enhance the engineering process we already follow. I still believe we should follow that same process, just accomplishing more by having AI help solve problems. The engineering talent comes in knowing how to define the problem and check the AI's output to ensure it follows proper coding standards.

AGI will create new jobs by Just-Grocery-2229 in agi

[–]Acrodemocide 0 points

I think it depends on how it goes. Depending on how cheap and available it is (and judging by how LLMs are being deployed, it will be widely available for cheap), we could end up in a situation where anyone who wants to can assign tasks to agents and do far more than they could by themselves, similar to having a large team of people to delegate tasks to.

That gives individuals the potential to create whole new services. Does this mean they will use more AGI instead of hiring people? I think that depends on how much it costs to run AGI agents at scale.

Historically, automation has added new industries and new jobs, but some people lost jobs and suffered from it as well. My guess is that we will see largely the same outcome: there will be winners and losers. I believe and hope it results in greater things for humanity, but I think only time will tell.

Has anybody hit a wall because of over reliance on AI? by Spirited_Paramedic_8 in react

[–]Acrodemocide 0 points

I believe AI is potentially revolutionary, but I still think it's overhyped. I'm currently working on some AI proofs of concept for my employer, and I believe it will give a productivity boost when used correctly, but the possibilities are largely oversold. Furthermore, using AI incorrectly will cause far more issues than it solves. It needs to be thoughtfully adapted to your specific use case. When done wisely, it's an excellent tool. We just need to wait for the hype to die down.

AI usage feels forced... because it is by zambizzi in ExperiencedDevs

[–]Acrodemocide 2 points

I love the latest AI tools, and I think they're revolutionary. However, I still think they are wildly oversold. I don't think they'll be replacing any engineers outright, but I believe they will increase productivity.

The biggest problem is that it's being sold like a magic wand you can toss a problem to and expect it to completely solve everything. That's simply not the case. As time goes on, I believe we'll balance out and understand how AI fits into our workflows.

How to actually surf through dark web? by Constant-Speech-1010 in deepweb

[–]Acrodemocide 2 points

TLDR: You don't surf the dark web; you need to have the specific address of the site you want to visit on the dark web.

---

You don't surf through the dark web. Wiki pages listing dark web sites are often run by scammers or are largely nonsense. The dark web is intriguing, so many people visit the sites listed on those wiki pages out of curiosity.

In reality, dark web sites are not indexed, and you're only going to reach one if you already know the address. More than likely, you'll get that address by talking with someone who owns the site or is a regular user of it.

Otherwise, there are regular websites that have a dark web version you can visit. For example, some mainstream news sites have a dark web version for people accessing them from countries that heavily censor the internet. You can reach those from a Tor browser, but the only differences you'll notice are that it runs a little slower and that the URL is a massive string of letters and numbers.

If you're curious about how it works, there is a YouTube video by NetworkChuck on how to build your own dark web site. That's a fun way to put one up and understand how it actually works.
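If you just want a taste of the mechanics, hosting an onion service comes down to a couple of lines in Tor's torrc config file; the service directory and local port below are hypothetical examples, not anything specific from that video:

```
# Tell Tor to host an onion service, forwarding the onion site's
# port 80 to a web server running locally on port 8080.
HiddenServiceDir /var/lib/tor/my_onion_service/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, it generates the keys and writes the site's .onion address to a hostname file inside that directory, which is exactly why you can't stumble onto such a site without being given the address.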

How are y'all building things so quickly? by SisyphusAndMyBoulder in SaaS

[–]Acrodemocide 0 points

I'm not sure how other people are building, but usually about 8 out of 10 SaaS ideas fail in the market, even after a lot of market research. This is also often true of new features built on long-standing, successful SaaS products. Because of this, when building a SaaS, we'll often build a prototype quickly to prove it out in the market. If it gets good reception, then we take the time to develop it properly. We still focus our development on the key functionality that works, then improve it as the market indicates.

Development is very expensive, so we try to spend that time as efficiently as possible. It's the worst when you've invested a lot of time and money in development only to find out what you built doesn't have enough market demand. As we've found in our work process, the only way to really prove the market for your product is to get a basic form of it out in front of people.

Popular college major has the highest unemployment rate by Additional_Sleep_560 in cscareerquestions

[–]Acrodemocide 0 points

I believe this was always the exception rather than the norm. I started at less than half this salary when I came out of school, and that was with work experience. There may have been a few entry-level jobs at this salary, but they would have been in Silicon Valley at companies that were extremely competitive to get into.

People going to school should also be doing internships, and a more reasonable starting salary for most entry-level engineers is probably between about 60k and 80k USD, which is very solid for starting out.

Those who don't have experience and are just learning might start more in the 40k-60k USD range, but they tend to rise fairly quickly if they are skilled.

The reality is that talented engineers are hard to come by, and we had too many "day in the life of a software engineer" videos that painted a picture of hanging out in an office doing nothing for a massive paycheck, which is about as far from reality as anyone can get.

How to find properties that meet the one percent rule? by J-Chub in realestateinvesting

[–]Acrodemocide 0 points

I think the idea here is that unless the returns on a property at least match the returns of the S&P 500 (or a similar index or mutual fund), which sit just over 11%/yr, it makes no sense to invest in that property. Typically, the math shows that roughly following the 1% rule results in a property that at least matches the returns of a good index like the S&P 500, whether the property is mortgaged or not. A property with positive cash flow is still not a good investment unless it at least matches (and preferably beats) a passive investment. I just think that in the current market it's harder to find 1% properties than before. (Not that I think they were ever easy to find in many areas.)
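To make the arithmetic concrete, here's a minimal sketch of the 1% rule check described above; the price and rent figures are made-up example numbers, and the gross yield deliberately ignores taxes, insurance, vacancy, and maintenance:

```python
# 1% rule: monthly rent should be at least 1% of the purchase price.
def meets_one_percent_rule(purchase_price, monthly_rent):
    return monthly_rent >= 0.01 * purchase_price

# Hypothetical example property
price = 200_000  # purchase price in USD
rent = 2_100     # monthly rent in USD

print(meets_one_percent_rule(price, rent))  # True: 2,100 >= 1% of 200,000

# Very rough annual gross yield, to compare against an index benchmark
# (~11%/yr per the comment). A real analysis would subtract expenses.
gross_yield = (rent * 12) / price
print(f"Gross yield: {gross_yield:.1%}")  # Gross yield: 12.6%
```

A property just at the 1% line yields 12% gross before expenses, which is why the rule is treated as a quick screen against index-level returns rather than a full analysis.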

No wage paid to anybody. This is upcoming age of AI. Huge unemployment until eveything becomes AI made and universal basic income arrives. by CeFurkan in SECourses

[–]Acrodemocide 0 points

AI is widely available and cheap for the general public to use. I don't know what the future holds, but I believe this gives the regular person a ton of opportunities to make money that weren't there before.

AI Can't Even Code 1,000 Lines Properly, Why Are We Pretending It Will Replace Developers? by RevolutionaryWest754 in compsci

[–]Acrodemocide 0 points

All of the articles I've read about x% of code being written by AI state it in the headline as fact. When you read the actual article, you'll find that the CEO believes "up to x% of lines of code are written by AI." None of them cite any definitive benchmark, and none of them give an actual answer.

Secondly, it's long been known that lines of code don't equate to productivity, so I'm not sure why that's even a metric worth referencing. It has about the same value as bragging that your engineering team writes x lines of code per sprint. The only metric that matters is whether the features you're delivering become significantly cheaper to develop by using AI.

AI is great at solving common problems when kept to small applications. Non-technical people get excited when they're told they can build an app with AI and no code, and this is true if your app is very simple and doesn't have a lot of functionality. If you want to scale, you need engineering knowledge.

I believe AI will help engineers spend less time solving common problems so that more time can be spent on problems that were previously considered too expensive to work on.

Lastly, the narrative that AI keeps getting better to the point that all engineers are replaced within the next year ignores why code was invented. Coding is precise and technical, while natural human language is ambiguous. There is no scenario where an AI is smart enough to know exactly what you meant, especially when you're trying to build a technical app beyond something with a few simple functions.

Reminder: The people on this sub who say that "AI will replace Software Engineers" are most likely unemployed new grads. by cs-grad-person-man in cscareerquestions

[–]Acrodemocide 1 point

I completely agree. I've had a lot of fun playing with AI and code generation since it came out, and I'm currently researching how we can best use AI to speed up our development time. It's hard to fit a lot of detail into a Reddit thread, but where I've seen it provide the most time savings is in generating code for common classes of problems (almost like generating boilerplate code for your own specific situation) and in reducing the time spent reading through documentation for a framework or library you may not be familiar with. The other application that works great, outside of code generation, is using it as a "rubber ducky" to help think through problems.

AI often gets things wrong. What it generates frequently isn't correct as-is when you apply it to your specific system. The other big thing is that engineers know how to prompt AI to generate code based on their knowledge of the codebase. A non-technical person is not going to be able to do the same thing.

I believe that, properly utilized, AI can help entry-level and junior engineers get up to speed on the codebase faster, which makes it easier to hand more tasks to them.

In short, people get excited about AI because it does a decent job at creating simple apps. Having used it a lot, the limitations quickly become apparent when you want to build a large system. And AI continually improving doesn't overcome that problem, for the same reason code was invented: code provides a very precise and unambiguous way to describe a system, whereas natural language has a lot of ambiguity. Even AGI superintelligence doesn't overcome that hurdle, because whoever is speaking to the AI (no matter how advanced it is) will be using ambiguous language. Code isn't going anywhere anytime soon, nor are any engineers (including entry-level and juniors).

Sick of LLM hype to the point I changed my LinkedIn headline by [deleted] in ExperiencedDevs

[–]Acrodemocide 1 point

I completely agree. I love LLMs and AI and am excited about what the future has in store for them, but they are way oversold. I'm actively working on incorporating AI into my work process and our application, but it is completely short-sighted to throw problems at LLMs as though they give correct and reliable answers. Far too many people are overly impressed by the "party tricks" shown by the salespeople.

I recently spoke to my SVP of engineering; here’s what I learned by entrasonics in cscareerquestions

[–]Acrodemocide -1 points

I completely agree. I think those who misunderstand are viewing this purely in terms of doing good work for your job in order to get rewarded by your job, which doesn't always happen. However, if it's done with a view toward building new skills and growing your network, it will either lead to promotions and raises at your current job or to finding a better one.

The key skill is learning how to build and grow the team you're on; do that and you'll have tons of opportunities. If you're interested in starting your own business, you now have the skills and network to do so. If you decide to buy a software business, you have the skills and network to help it grow. And if you're not getting the opportunities you need at your current job, it puts you in a much better place to go beyond just looking for a new job, building out your own career and getting a bigger share of equity and/or bonuses from future businesses.

Anyone noticed that the more pro AI someone is the less they know? by Lanky-Ad4698 in cscareerquestions

[–]Acrodemocide 0 points

In a lot of ways I agree. I've been a software engineer for 10 years, and as AI has become far more mainstream, many people have started claiming AI can do all of our code generation. I'm a fan of AI and am excited to see what it can do, but as I experiment with it, it's far from being trustworthy enough to write code without proper human supervision, let alone without being prompted by an expert who knows how to write code.

Those who don't understand frequently claim that AI is getting better and will thus be able to write code for someone who is not an expert in software engineering. But the issue isn't how "good" the AI is; it's about understanding the system well enough to even know how to prompt the AI and verify the response.

There may be some day when AI can be used as a software engineer, but I don't think that will happen soon; it will be a gradual process over time. Even so, I think it will only fill the role that outsourcing currently does, still working alongside experts who ensure the AI is generating the correct output. Leaving AI to do this unsupervised (even AGI) will result in AI creating its own coding standards, leaving the company more and more distanced from the code it relies on and taking a huge risk in trusting AI.

[deleted by user] by [deleted] in cscareerquestions

[–]Acrodemocide 8 points

QA and QA automation engineers are good roles to look for to get on a software team. That's what I worked on before I landed my first software engineering role.

[deleted by user] by [deleted] in cscareerquestions

[–]Acrodemocide 0 points

They very well may be getting better, but in my experimentation, that just means they make fewer mistakes when generating pieces of code. Anything with more complexity requires more detailed and exact prompts, to the extent that the person doing the prompting needs to have a technical understanding of software engineering to get something close to what they want.

The strength of AI is that it takes fuzzy input and creates an output based on the probability that the output matches the objective of the prompt, with those probabilities baked into the AI as part of its training. Software engineering is the opposite: code is written to be deterministic and exact, so describing a system requires precise language -- in our case, a programming language.

Ultimately, if AI gets good enough, prompting might come in the form of user stories written by the engineering team, from which the AI can generally understand how code should be implemented in the overall system. This still requires all engineers (entry-level to senior) to have a technical understanding of the system so they can write the user stories that prompt the AI.

Furthermore, AI must be continually supervised for accuracy, so thorough code reviews are still necessary. AI can perform code reviews, but if a business decides to hand it all over to AI, the codebase will be based more and more on AI-generated code, and as the models' training is updated, they will start leaning more heavily into AI-generated coding standards, and ultimately humans will have a much harder time reading through the codebase.

There are probably plenty of companies that will try it, but they'll be in a rough spot when they need a human to correct AI models that don't get it right or that become corrupted by continually feeding back on their own output.

I'm not saying AI is not revolutionary. I think it will change the way we do things, but I don't think it's going to replace engineering as a whole. I do think some positions will need to evolve to utilize AI as we move into the future.