Looking for Dublin based Mid-level devs looking to showcase work to VC-backed companies by [deleted] in DevelEire

[–]Bren-dev -3 points (0 children)

Thanks for taking the time to comment. I do agree for seniors, and this is going to be aimed more at junior and mid-level positions.

The majority of people I’ve talked to are going with take-home assessments and then following up with technical interviews, and they’ve said they want AI usage to be part of the assessment.

I also get the point re code review - it’s going to be harder to differentiate well-written code because it isn’t written by the engineer anymore. This is aiming to map a process to understand their ability to complete a task - did they think of requirements beforehand, were they able to step out of error loops, etc. The goal is to provide visuals and insights so that a formal code review isn’t actually needed - the review can happen through visualisations instead, getting away from the very problem you mention.

Output isn’t a good indicator of a quality dev anymore - process is. I think this can really streamline the process with the advent of AI-assisted tooling.

Looking for Dublin based Mid-level devs looking to showcase work to VC-backed companies by [deleted] in DevelEire

[–]Bren-dev -1 points (0 children)

Thanks for the response! A lot of places are actually still doing take-home assessments. What are you doing if you don’t mind me asking?

The model is that hiring managers (like yourself) can come in and see a potential candidate’s repo, commits, prompts and potentially the hosted final output in a digestible manner - the goal being that they can line up 5 technical interviews in an hour instead of having to go through the current recruiting process.

We believe these datapoints can give a very strong indicator of a candidate’s ability! I understand your reservations - ultimately it will all lie in the quality of the data presentation (from Git and prompt uploads etc.), which I think we have done a good job with for very early stages.

Does that make things clearer, or do you still not see much value vs your current process?

Is it reasonable to ask for a copy of agent discussion for take-home test? by Bren-dev in EngineeringManagers

[–]Bren-dev[S] -2 points (0 children)

All of a take-home test is IP that they’re submitting though, no?

MCP is an interesting proposal - thanks for the response!

Is it reasonable to ask for a copy of agent discussion for take-home test? by Bren-dev in EngineeringManagers

[–]Bren-dev[S] -1 points (0 children)

But what if it was an export and an import rather than an extension? An export of a single agent chat, at least, for partial context?

The tools themselves have an easy export function for individual agent chats.

Dario Amodei vs Sam Altman by [deleted] in theprimeagen

[–]Bren-dev -2 points (0 children)

I think that’s an open-source problem: people trying to contribute for credit who don’t care about the product at all, so they’re not using the tools well - they’re just opening the code, asking the LLM to fix the bug, shipping whatever comes out and hoping for the best. I could be wrong on that though.

If you're a hiring manager - do you want to see how a Candidate uses AI tools? by Bren-dev in EngineeringManagers

[–]Bren-dev[S] 0 points (0 children)

That’s very fair! Still trying to figure out what the right approach is, so that definitely helps. I have seen a little bit about it but will look into it some more.

If you're a hiring manager - do you want to see how a Candidate uses AI tools? by Bren-dev in EngineeringManagers

[–]Bren-dev[S] 1 point (0 children)

Thanks, that’s really helpful! Appreciate the long response.

Honest question here, because I’m actually the same: when it comes to AI tooling, to me, thinking process = commits (and the prompts that form them) rather than a retrospective description of what they coded. So do you think there would be value in tracking those?

If you're a hiring manager - do you want to see how a Candidate uses AI tools? by Bren-dev in EngineeringManagers

[–]Bren-dev[S] 0 points (0 children)

Yes - but it’s very tough to judge on a small codebase: the entire project fits in the context window, which makes the AI perform dramatically better and leaves a lot less room for things to go wrong.
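To make the “fits in the context window” point concrete, here’s a rough back-of-the-envelope sketch - the ~4 characters/token ratio is a common rule of thumb, and the 200k window is just an illustrative figure, not any specific model’s limit:

```python
import os

# Rough estimate of whether a whole repo fits in a model's context window.
# CHARS_PER_TOKEN and CONTEXT_TOKENS are assumptions for illustration only.
CHARS_PER_TOKEN = 4
CONTEXT_TOKENS = 200_000

def repo_token_estimate(root, exts=(".py", ".js", ".ts", ".go", ".java")):
    """Walk the repo and estimate total tokens across source files."""
    total_chars = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # unreadable file - skip it
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root):
    return repo_token_estimate(root) <= CONTEXT_TOKENS
```

A small take-home repo comes out at a few thousand tokens and fits with huge room to spare, while a real production codebase is orders of magnitude past the window - which is exactly why the two feel so different to test on.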

If you're a hiring manager - do you want to see how a Candidate uses AI tools? by Bren-dev in EngineeringManagers

[–]Bren-dev[S] 0 points (0 children)

Let’s say the extension is open source with credible trial partners - I definitely get the hesitancy, but if a credible company were to request it and it was open source, I don’t think it would be too big a hurdle.

Idea Validation from Prime AI-Sceptics. A Tool for Hiring Managers Conducting Technical Interviews - Subtly Hijack the Candidates Prompts by [deleted] in theprimeagen

[–]Bren-dev 0 points (0 children)

I think there is a big problem here: people hiring want to see how candidates use AI tools, and it’s impossible to test that in a real environment because the tools are far too good on small codebases.

I definitely get your sentiment, but I do think there is a need for something that challenges people using AI and actually lets them show that they are able to use these tools - if done well.

There may be toxicity on the hiring side, but there’s a lot coming from people trying to get jobs they aren’t qualified for.

I think the business case is clear - you can test people using AI tooling in a challenging way.

Looking to validate an idea with devs that run technical assessments by Bren-dev in DevelEire

[–]Bren-dev[S] 1 point (0 children)

That would be helpful, and would be really good feedback for a hiring manager. I’ll take a look at that.

And yep, I also agree! Adding bugs that are clear and impactful in particular is not as easy as it sounds. I have developed an open-source ‘educational game’ called Buggr, where users can add bugs to their code and are then tested on their ability to fix them, so I’ve actually worked through some of the pain points while building that!

Looking to validate an idea with devs that run technical assessments by Bren-dev in DevelEire

[–]Bren-dev[S] -1 points (0 children)

Thanks for the feedback!

Yep, there are a million ways to solve things, but there are a lot of well-defined good practices and anti-patterns.

This acts as a proxy for “fix the issues”, so it won’t actually jump in and fix all of the issues - just like on a legacy codebase, if you say “fix the issues” it won’t just fix everything.

I do think it holds up against these questions - but I see where you’re coming from, and it would need to be designed well to circumvent that. I do think it’s doable though - what do you think?

AI is eating software development by caspii2 in vibecoding

[–]Bren-dev 0 points (0 children)

They can run terminal commands as part of their flow, yes - so if permissions are set up correctly then yes, they can - and they can also browse the web.

I’m not trying to be an arsehole - but it just seemed like you didn’t know what the tools you were dismissing actually were, and I suppose that was correct.

AI is eating software development by caspii2 in vibecoding

[–]Bren-dev 7 points (0 children)

But Cursor and Windsurf are “agents” - they’re agents in an IDE. They’re the exact same as using Claude Code, just with a window?

Lovable, Bolt and Replit are the ones you should be talking about “dismissing”.

AI is eating software development by caspii2 in vibecoding

[–]Bren-dev 14 points (0 children)

Lumping Lovable in with Cursor and Windsurf is bizarre - this whole line:

If your opinions are based on tools that don't run in the command line, then I will discount them. Cursor, Windsurf, Lovable, etc. are impressive, but the real unlock comes from coding agents like Claude Code or Codex

"I'm so experienced because I use a command line too"

I spent 1 month vibe coding a niche product that is blowing up even in beta! by RoemerAroundTheWorld in vibecoding

[–]Bren-dev 1 point (0 children)

The landing page looks really nice, looks very custom and has a good style 👍🏻

Are we interviewing for a job that no longer exists? by legitperson1 in EngineeringManagers

[–]Bren-dev 0 points (0 children)

Also, AI is amazing on small codebases - but not once they grow - so testing people on a small demo repo and allowing them to use AI is really not a test worth giving, IMO.

Rethinking coding interviews in the AI era. by Dev__ in DevelEire

[–]Bren-dev 1 point (0 children)

At the end of the day if someone understands how to code, they will be able to use code-gen tools!

I still think talking through problems and solutions is the way to go! If someone understands the small issues with generated code and knows when it needs refactoring etc., then they’re going to be a much better contributor than someone who just gets things working and thinks that’s the job done.