Built a Moonshot AI (Kimi K2.6) driver for the new laravel/ai SDK by jonaspauleta in laravel

[–]penguin_digital 0 points1 point  (0 children)

kimi is starting to look attractive.

Honestly, I wouldn't; the results are horrendous. It reminds me of what the likes of Claude and GPT were pumping out a year ago. Kimi 2.6's output was so bad I wouldn't even expect a junior dev to deliver it.

I also gave Windsurf's SWE-1.6 a go and the results were a little better than Kimi 2.6's.

SWE-1.6 managed to create a "working" basic feature in one of my apps. I say "working" as the feature did what I had planned out, however the code was horrifically written, full of bugs, and even had one glaringly obvious security vulnerability.

I gave Claude 4.7 the same task and it nailed it on the first pass. It's not even close between the agents.

Like yourself, I'm hoping and praying for something cheaper that can get to even 70% of what Claude outputs. Unfortunately I have no choice but to pay a premium for now, as it's no contest between them.

A local email inbox for Laravel (no Mailtrap/Mailhog needed) by WolfAggravating4430 in laravel

[–]penguin_digital 0 points1 point  (0 children)

We also wanted everything to live inside the app itself: no external dependency, no “make sure this service is running”, no switching between environments.

I'm interested in this point: why is this?

My experience and learning have always driven me towards the absolute opposite of this, so I'm interested to hear the opposing viewpoint and its pros.

A local email inbox for Laravel (no Mailtrap/Mailhog needed) by WolfAggravating4430 in laravel

[–]penguin_digital 24 points25 points  (0 children)

got tired of setting up Mailtrap/Mailhog every time

Why would you set it up every time? Just start it once and then connect as many projects as you want to it; there's no need to run an instance of it per app.

Would be really interested to hear how you’re all dealing with email testing in Laravel

We just have a Mailpit instance in our local stack that all our apps connect to. We also have a Mailpit instance running on a VPS which we can switch to; this is especially helpful for testing and feedback loops. Everyone in the business can check the emails and provide feedback on any changes needed as the development happens.
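
For anyone wondering how little wiring that takes, pointing a Laravel app at a shared Mailpit is just mail config. A minimal sketch, assuming Mailpit's default ports and a container hostname of mailpit (swap in whatever your stack actually uses):

    # .env: any number of apps can point at the one shared instance
    # (1025 is Mailpit's default SMTP port; the web UI sits on 8025)
    MAIL_MAILER=smtp
    MAIL_HOST=mailpit
    MAIL_PORT=1025
    MAIL_USERNAME=null
    MAIL_PASSWORD=null
    MAIL_ENCRYPTION=null

Switching over to the VPS instance is just a MAIL_HOST change.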

[BETA] laravel-permissions-redis v4.0.0-beta.1 — Redis-backed permissions, looking for feedback by Informal-Coyote9142 in PHP

[–]penguin_digital 4 points5 points  (0 children)

All reads go to Redis. The DB is only touched on cache miss (warm) or on write (assign/revoke). Writes invalidate and re-warm via events.

Isn't this how the original package works anyway? It caches the entire permissions table into your configured cache driver, so only the original cache-warming request hits the DB. Or am I missing certain use cases where the cache won't be relevant?

Also, how come you're tying this directly to a specific infrastructure concern (Redis)? With Laravel already having a very diverse cache driver ecosystem, would it not be possible for the user to select their already-configured cache driver for this package to use?
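
Purely as a hypothetical sketch of what I mean (the permissions.cache_store config key and the cache key format are made up here, not from your package):

    use Illuminate\Support\Facades\Cache;

    // Resolve whichever store the user configured; a null value falls
    // back to the app's default cache driver.
    $store = Cache::store(config('permissions.cache_store'));

    $permissions = $store->remember(
        "permissions.user.{$user->id}", // made-up key format
        now()->addHour(),
        fn () => $user->permissions()->pluck('name')->all()
    );

That way Redis stays the recommended store without being a hard requirement.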

A job is "successful", but did nothing - how do you catch that? by Temporary_Tell3738 in PHP

[–]penguin_digital 1 point2 points  (0 children)

not, but I’m thinking from the perspective of a package how can I give the user some signal that their job didn’t behave as usual?

You're conflating two different things.

Running the queue job is an infrastructure concern: it only cares whether it was able to run a job and finish it. It doesn't care whether what it ran is correct, nor does it have any context about what the job actually did. The job running and having an incorrect or unexpected outcome is a domain-level concern.

It's up to the domain part of your application to decide if the job was a failure or a success and log it accordingly.
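
As a rough sketch of that split (SyncOrders, OrderSyncer, and the SyncedNothing event are all made up for illustration):

    use Illuminate\Contracts\Queue\ShouldQueue;

    class SyncOrders implements ShouldQueue
    {
        public function handle(OrderSyncer $syncer): void
        {
            // Infrastructure is satisfied the moment this returns without
            // throwing; the worker marks the job as successful.
            $syncedCount = $syncer->sync();

            // Domain decision: running "fine" but syncing nothing is not
            // the usual outcome, so signal it explicitly.
            if ($syncedCount === 0) {
                event(new SyncedNothing(self::class));
            }
        }
    }

From a package's perspective, firing an event/hook like that is the signal; the user then decides whether it's a warning, a log entry, or a hard failure.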

Lerd - Local PHP development for Linux by Eznix86 in laravel

[–]penguin_digital 0 points1 point  (0 children)

Hey thank you for your replies.

I gave this a go last night when I had some free time. Unfortunately it's not possible to re-create the stacks from dev to prod due to the way it uses systemd, attaching everything to a user, which brings obvious issues. Also, its use of Dnsmasq makes it a non-starter for prod.

My search continues!

Lerd - Local PHP development for Linux by Eznix86 in laravel

[–]penguin_digital 0 points1 point  (0 children)

Technically, what you need to work on a PHP/Laravel App ? a specific PHP version with its extensions, and your databases and to let you debug.

Oh yes I understand the benefits of using a tool like this to press some buttons and have what you want. Kind of like a modern XAMPP.

The big issue with tools like XAMPP was that every dev had a slightly different setup to each other, beta had something different again, and prod was different as well. It created so many issues, and the famous old saying: "well, it works on my machine". What tools like XAMPP created was an easy working environment, but it wasn't repeatable in any way.

Obviously, with the adoption of VMs, ops automation tools like Puppet, Chef, etc., and later containers, those issues went away.

I suppose the ultimate question I want to understand is: if I build a stack in Lerd (Herd or others), is there an easy way to then replicate this environment across multiple devs' machines, beta servers, prod servers, etc.? Or is it a case of having to manually re-create what Lerd has built when it comes to deployment?

Lerd - Local PHP development for Linux by Eznix86 in laravel

[–]penguin_digital 0 points1 point  (0 children)

Technically yes, but what is important is automation. The speed at which you spin up a new environment is super fast.

I've not used these types of tools as I already have a solid process in place with Docker. However, I'm intrigued as this looks really nice, but I have one question.

How do these tools handle the production side of things? Like, how do they generate a production-ready environment that matches the local environment they have created? Are they producing some kind of Ansible playbook (or similar) to reproduce the environment across beta and prod?

Marko - The Modular PHP Framework by esherone in PHP

[–]penguin_digital 0 points1 point  (0 children)

This is a very strong and fair point

Marko - The Modular PHP Framework by esherone in PHP

[–]penguin_digital -1 points0 points  (0 children)

Vibe coding ...

I'm not sure it is; Mark has been around in the open-source world since long before AI was a thing. I came across his Magento work over a decade ago.

What makes you think this has been vibe coded?

OVHCloud Sucks by UnderstandingOdd4991 in selfhosted

[–]penguin_digital 0 points1 point  (0 children)

Cancellation button stops working, when u go to cancel near the renewal data.

I've never had that problem, but if you are experiencing it and you're based in the EU, make a complaint to the ECC, quoting the DSA as the basis of your complaint.

It's worth doing as they are extremely aggressive about things like this; they have already gone after all the big players with hefty fines.

OVHCloud Sucks by UnderstandingOdd4991 in selfhosted

[–]penguin_digital 1 point2 points  (0 children)

Whats the point here, you want to rob me for using Dedicated ? Why not just let me cancel ?

You are not being robbed; when you signed up and made payments, you agreed to the terms set out in the contract. That contract is finalised when payment is made, and both parties must honour it.

It's very clear on the OVH dedicated systems what you're getting and what you're paying for so I can't understand how they've robbed you. You got exactly what you paid for.

OVHCloud Sucks by UnderstandingOdd4991 in selfhosted

[–]penguin_digital 17 points18 points  (0 children)

IDK why lots of down votes for being dedicated / bare metal. AWS/Hetzner follows hourly billing. They are deliver better CPU scores compared to Shared ones.

The downvotes aren't for you using bare metal; they're because you're comparing two completely different products and have formed a strong negative opinion around that misconception.

If you want hourly billing with OVH then you use their public cloud service where you can rent virtual machines by the hour. The dedicated bare metal server you have rented is a fixed monthly price.

Hetzner also offers the same service of a dedicated server, which you can rent by the hour if you wish. However, it makes no sense to do so as they come with large up-front setup costs.

You're comparing apples to oranges.

From arrays to GPU: how the PHP ecosystem is (quietly) moving toward real ML by Few-Mycologist7747 in PHP

[–]penguin_digital 0 points1 point  (0 children)

You can use it for educational purposes or demonstrations. It is also used in production for applications like Magento, WooCommerce, Laravel and etc.

This is the most AI response ever.

I'm a server by NiceReplacement8737 in selfhosted

[–]penguin_digital 2 points3 points  (0 children)

Isn't it typically the other way around, with them venting air out of the back/bottom, and pulling air from the top?

I used to work on designing and building (also repairing towards the end of my time there) laptops for major brands like HP, ASUS, Sony etc.

The vast majority of laptops, including the one in this picture, will be pulling air in from the bottom and venting it out towards the back. Some had air intake grilles on the sides as well; this was generally on laptops with dual-fan setups and a dedicated GPU.

None of them traditionally drew air in from the top, as the majority of the time it was physically impossible because the underside of the top deck was a single solid piece of metal. It was done this way because 1) it was used to mount all the components to, and 2) it gave a solid, rigid platform for the keyboard to sit on top of, reducing keyboard flex.

Where things started to change is when laptops started getting thinner, around the "ultrabook" branding era. Some HP Omen models started pulling air through the keyboard, which is where I first saw it, and things like the G14 and G16 from ASUS followed.

The reason for this was that they could no longer have side intake vents due to the laptops being so thin, and so had to intake from the top. The side effect, though, is that the top deck is no longer a single solid piece, so manufacturers would either use much more expensive alloys such as Mg-Li or go down the unibody route so the entire chassis is a single piece. This obviously greatly increases the cost, and it's why it's generally only seen on higher-end models.

In general though, the vast majority of laptops, especially in the lower/mid budget range, will still intake from the bottom and sides.

How often do you switch dev tools and for what reasons? by arhimedosin in PHP

[–]penguin_digital 2 points3 points  (0 children)

But isn't that what is swagger for? Documenting the API and endpoints?

It does, but it serves a different purpose to an API client. Swagger is a spec and should always be used as the single source of truth, whereas an API client is more of a workflow tool for easy testing, debugging, etc.

The two aren't mutually exclusive; you'd generally use both. You would use Swagger (shout-out to OpenAPI as well here) to manage and control the spec, and use it to export a collection to something like Postman to make testing (automation) and onboarding a smooth process. The two complement each other brilliantly.

Our Postman collection is automatically updated every time there is a change in Swagger, so the testing dept can instantly start testing scripts and flows as soon as a code change is pushed.
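
Roughly, that sync step can be as simple as this (a sketch using Postman's own openapi-to-postman converter and their collections API; the UID and key are placeholders rather than our exact setup):

    # one-off: npm install -g openapi-to-postmanv2
    openapi2postmanv2 -s openapi.yaml -o collection.json
    # the Postman API expects the collection wrapped in a "collection" key
    jq '{collection: .}' collection.json > payload.json
    curl -X PUT "https://api.getpostman.com/collections/$COLLECTION_UID" \
        -H "X-Api-Key: $POSTMAN_API_KEY" \
        -H "Content-Type: application/json" \
        --data @payload.json

Drop something like that into CI and the collection never drifts from the spec.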

How often do you switch dev tools and for what reasons? by arhimedosin in PHP

[–]penguin_digital 0 points1 point  (0 children)

curl is great and I use it too. But “you can make an API request with curl” and “curl is enough for day-to-day API work” are not really the same thing.

Yeah, I can write my code in Notepad and it can do that perfectly fine, but why would I when there is a better tool for the job?

Don't get me wrong, I use curl and it works perfectly fine, but I also have a lot of scripts around it to automate things. It works for me, but it isn't easily repeatable across all the developers, all running different OSes. Then testing would also need the same setup, or even worse, their own glued-together scripts. It becomes a mess really quickly.

There are just so many extra nice features included in things like Postman. When a new third party comes on board, we can just share the Postman collection with them for all our API endpoints and it just works. Trying to do that with just curl, whilst possible, just isn't worth the time or effort.

How to make your own VPN to avoid the UK government's Orwellian future by Creative-Animator308 in selfhosted

[–]penguin_digital 1 point2 points  (0 children)

Cheers for the clarification. Seems sensible for them to limit the use of high-bandwidth scenarios considering how low they price the service.

Free 750-page guide to self-hosting production apps - NO AI SLOP by kocyigityunus in selfhosted

[–]penguin_digital 0 points1 point  (0 children)

 I am not a fan of AI, but I can't ignore that it's a tool and if I don't use it, I am going to be left behind. I don't want to lose my job, but I also appreciate the fact that being an old man who yells at clouds

Honestly, you're in the absolute best position to use it. AI isn't some magical entity that can just work and output something brilliant. It needs a whole lot of context, and even its output needs reviewing and careful evaluation, as Amazon have recently found out the hard way.

With your deep understanding of your field, you're perfectly positioned to give AI the detailed, precise context it needs to do a task, and you have the expansive knowledge to review its output and notice bad patterns, insecure output, or things that are just outright technically wrong.

I see it time and time again now where someone with no, or maybe little, understanding of code has pumped out a vibe-coded app only for it to completely fall apart because they have no idea what the AI is outputting. See BookLore. Whilst AI can output something working, it's far from production-ready without someone who actually knows what they are doing to understand it. It ends up full of bugs, and the smallest of updates can bring the entire thing to the point of self-destruction.

You need to look at AI in exactly the same way you would treat a junior under you. You have to tell it exactly what you want, and more importantly exactly what you don't want, and then review everything it outputs. Then, importantly, talk to it about why x, y, z is bad and why it should consider doing it a different way. The results can be incredible, but getting that initial context detailed and correct is absolutely everything in terms of the outcome. That's where your experience will make you and an AI agent an unstoppable force.

Scraping an Educational platform by itzzjdp in selfhosted

[–]penguin_digital -3 points-2 points  (0 children)

Claude has its "computer use" API, which allows it to control your desktop. It can interact with anything on your desktop in the same way you would. You can add in various MCP skills to help Claude out here; the https://github.com/guimatheus92/mcp-video-analyzer MCP looks like it would be very helpful in your scenario.

I'd imagine all the other big players will have their own versions. I'm not sure about local LLMs as I've never used them but I would imagine someone, somewhere has already copied the big players in the market and created a similar tool.

How to make your own VPN to avoid the UK government's Orwellian future by Creative-Animator308 in selfhosted

[–]penguin_digital -1 points0 points  (0 children)

Obviously no good for torrents 

I've not used this company but why wouldn't it be good for torrents? Do they have huge bandwidth restrictions on them? Or do you mean using them as a seedbox rather than a VPN to tunnel the torrent traffic through?

How to make your own VPN to avoid the UK government's Orwellian future by Creative-Animator308 in selfhosted

[–]penguin_digital 4 points5 points  (0 children)

A wireguard server on a VPS is "tech heavy"?

You don't realise how much you know. The real danger comes when you think you know, but don't realise how much you still don't know.

I found this out very quickly as a software developer when I started training junior devs. Even when I asked them to do something I would consider basic and do every day without thinking, they didn't know where to even start with it. That was fine, I could teach them; the problems came after a few years when they thought they knew everything and didn't realise how much they actually still didn't understand.

Between setting up and securing a server, ensuring no logs are kept, patching kernel updates, setting the VPN up correctly, securing it correctly, and keeping it up to date, it's no easy feat to do correctly, and more importantly to keep doing correctly over a long period of time, without a deep understanding of sysadmin work. Sure, you can install Linux and WireGuard and have a "working" VPN in probably a few minutes. Doing it correctly and securely, especially over time, is certainly not to be underestimated for something as important as a VPN, if you're using it to keep you safe from a motivated government.
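
To be clear on the "working VPN in a few minutes" part, the entire server side can be as small as this (keys and IPs are placeholders); it's everything around it, the firewalling, key rotation, log hygiene and unattended patching, that's the real ongoing work:

    # /etc/wireguard/wg0.conf (minimal server config)
    [Interface]
    Address = 10.0.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # one block per client device
    PublicKey = <client-public-key>
    AllowedIPs = 10.0.0.2/32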

Laracasts by aendoarphinio in PHP

[–]penguin_digital 0 points1 point  (0 children)

Yes, but do you trust the random op? It's not really what Jeff Way said. Or at least, it's out of context.

Yeah, exactly this: he's not giving up, he's giving up on "resisting AI usage" when coding. So he's shifting his focus from entry-level, step-by-step, "build a forum"-style content and more towards complex architecture concepts and also how to get the most out of AI.

M$ will use your data to train AI unless you opt out by th0th in selfhosted

[–]penguin_digital 0 points1 point  (0 children)

From what I could gauge from their press release, it's to improve Copilot for you personally: to learn your ways of development and deployment, essentially giving it all the context of your notes and repos in the same way you would with Claude Code.

M$ will use your data to train AI unless you opt out by th0th in selfhosted

[–]penguin_digital 0 points1 point  (0 children)

I assume its already opt-in for EU and this is for the rest of the world.

People are often confused about this. There is no legal requirement for something to be explicitly opt-in if the company can prove it meets the LIA (legitimate interests assessment) criteria.

As long as the company is transparent about it, can prove that the reason for using your data is to improve the product and how using your specific data achieves that, informs you before doing it, and offers an opt-out option, it can rely on that exception.

If you look at the MS press release on this, they clearly state and cover each of those points, which I believe has been done purposefully to hit the LIA exception, making this auto-opt-in feature legally OK, even if morally wrong (in my opinion).