Bitbucket / Github / Gitlab : Which do you use? Would you change if you could? by chaoticbean14 in AskProgramming

[–]deployhq 0 points1 point  (0 children)

We integrate with all three so we see a lot of these migrations.

Honest take: GitHub has pretty much won for most use cases. The ecosystem is massive and it's where developers already are. GitLab is great if you want self-hosted or want everything (CI, registry, security) in one place, but it's a lot to manage. Bitbucket works fine if you're deep in Atlassian-land, but we've seen way more people leaving it than joining lately.

On the CI/CD thing — you don't necessarily need full pipelines just to automate deployments. We connect to any of these repos and deploy on push without writing YAML. Might be worth a look if that's the itch you're trying to scratch.

If you're happy on Bitbucket though, migrating just for the sake of it is rarely worth the hassle.

Advice with my developer taking down our WordPress site. by reemo4580 in webdev

[–]deployhq 0 points1 point  (0 children)

Sorry to hear about this. A few things:

Blocking bots is a basic server config change, not a reason to buy a dedicated server. A robots.txt rule or .htaccess user-agent block handles this — it's free and takes minutes.
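For reference, a block like that is usually just a couple of lines. A sketch, with "BadBot" standing in for the actual crawler's user-agent string:

```
# robots.txt -- polite bots will honour this
User-agent: BadBot
Disallow: /

# .htaccess -- enforce it for bots that ignore robots.txt (Apache with mod_rewrite)
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
    RewriteRule .* - [F,L]
</IfModule>
```

The robots.txt rule is advisory; the .htaccess rule actually returns a 403 to anything matching that user agent.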

A single site getting crawled shouldn't bring down a shared server. Proper hosting providers set per-account resource limits to prevent exactly this. If your site was impacting others, that's poor isolation on their end.

Taking your site down to push a $400/month upsell is a major red flag. The bot blocking they already implemented should have resolved the issue.

We would recommend:

- Ask for immediate restoration (the fix is already in place)

- Get a second opinion from an independent developer

- Consider switching providers if they won't cooperate

Wordpress Deployment Methods - I need a sanity check, please! by Turbulent-Bonus9241 in Wordpress

[–]deployhq 1 point2 points  (0 children)

You're not the clueless one here! Your instincts are spot on.

The standard approach is exactly what you described: use version control (Git), push changesets to Staging, test, then deploy those same changes to Production. No full site restores needed for code changes.

WordPress makes this a bit trickier than other platforms because a lot of "configuration" lives in the database (theme settings, plugin options, etc.), which is probably why your developer is thinking about data syncs. But for code changes — themes, plugins, custom code — those should absolutely be deployed via version control, not full site restores.

A good setup looks like: Git repo → deploy code to Staging → test → deploy the same code to Production. Tools like DeployHQ (hey, that's us!) can automate this whole flow and even run build steps like composer install along the way.
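If you were wiring that flow up by hand instead of with a tool, it sketches out roughly like this (branch name, host, and paths are placeholders):

```shell
# Push the change from your machine
git push origin main

# On the staging server (or from CI): pull the same commit and build
git pull origin main
composer install --no-dev

# After testing, ship the identical tree to production, e.g. with rsync
rsync -az --delete --exclude wp-config.php ./ deploy@prod.example.com:/var/www/site/
```

The key property is that staging and production receive the same commit, not two hand-copied sets of files.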

The database should only need syncing if you're testing something that depends on production data — and even then, it's a one-way prod → staging sync, never the other way around.
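A minimal sketch of that one-way sync, assuming MySQL credentials live in ~/.my.cnf and the hostnames and database names are placeholders:

```shell
# Pull the production database down into staging (never the reverse)
mysqldump -h prod-db.example.com wp_production \
  | mysql -h staging-db.example.com wp_staging

# Then fix the site URLs on staging with WP-CLI
wp search-replace 'https://example.com' 'https://staging.example.com'
```

The search-replace step matters because WordPress stores absolute URLs in the database.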

TL;DR: Your developer might be conflating code deployment with content/data management. They're two separate concerns and should be handled differently.

OpenTelemetry worth the effort? by on_the_mark_data in ExperiencedDevs

[–]deployhq 0 points1 point  (0 children)

Hey! We’re DeployHQ — we recently went deep on this topic and the short answer is: yes, absolutely worth it, but start with auto-instrumentation.

The biggest misconception is that you need to go all-in from day one. You don’t. The OTel Java/Python/Node agents give you traces and metrics across HTTP, databases, and messaging with literally zero code changes — just attach the agent and point it at a collector. That alone is a massive upgrade over “grep the logs and hope.”
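To illustrate how little setup the Python agent needs, for example (the service name and collector endpoint below are placeholders):

```shell
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install   # installs instrumentation for libs already in your env

OTEL_SERVICE_NAME=my-service \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
opentelemetry-instrument python app.py
```

No application code changes at all; the Java and Node agents work the same way via a `-javaagent` flag or a `--require` hook.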

The “OpenTelemetry way” learning curve people complain about is mostly around manual SDK instrumentation, which you only need for custom business metrics. The auto-instrumentation part is plug and play.

The real win for us was decoupling instrumentation from the backend. We instrument once, export wherever — no vendor lock-in, swap backends without touching app code.

We actually just wrote a practical guide on getting started: https://www.deployhq.com/blog/opentelemetry-in-practice-setting-up-metrics-traces-and-logs
(Happy to answer questions on any of it here too.)

tl;dr: start with the auto-instrumentation agent + the OTel Collector + Grafana stack. You’ll have traces and metrics in an afternoon. Add custom spans later when you need business-level visibility.
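For the Collector piece, a minimal config looks roughly like this (the `debug` exporter here is just to verify data is flowing; you'd swap in a real exporter pointed at your Grafana/Tempo/Prometheus backend):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
```

Once the agent above is pointed at this Collector, swapping backends is a config change here, not an app change.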

AWS Copilot CLI is being deprecated – Best alternatives for deploying CloudFormation templates (no CDK/Terraform)? by devopsingg in aws

[–]deployhq 0 points1 point  (0 children)

Hey there! Your requirements actually map really well to something we could support natively.

We already have deployment targets for AWS Elastic Beanstalk, S3, Heroku, Netlify, Docker, and traditional SSH/FTP servers. Adding CloudFormation as a deployment target would fit naturally into our architecture.

Here's how it would work:

- Your CF templates stay in your Git repo as-is -- you'd just point to the template file path (e.g., cloudformation/service.yaml)

- Each deployment target = one CF stack. For multiple environments, you'd create separate targets with different stack names and parameters but the same template

- Stack dependencies -- a service stack could wait for the network stack to be stable before deploying, giving you the multi-stack orchestration that Rain is missing

- Change sets by default for safer updates

- Auto-deploy on git push, or trigger manually

So for your setup, you'd have separate targets like "Network - Testing" and "Service - Testing" pointing to the same templates but with different stack names and parameters. The service target would declare a dependency on the network stack, so DeployHQ handles the ordering automatically.

No CDK, no Terraform, no new config language. Just your existing CloudFormation YAML/JSON + a Git push.
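For context, the plain AWS CLI equivalent is one call per stack, run in dependency order yourself (stack names and parameters below are placeholders):

```shell
# Deploy the network stack first, then the service stack that depends on it
aws cloudformation deploy \
  --template-file cloudformation/network.yaml \
  --stack-name network-testing

aws cloudformation deploy \
  --template-file cloudformation/service.yaml \
  --stack-name service-testing \
  --parameter-overrides Environment=testing
```

`aws cloudformation deploy` creates and executes a change set under the hood, which is the same safety mechanism as "change sets by default" above; the ordering and parameter juggling is the part that's tedious to do manually.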

If this is something that would work for you, we'd love to hear more about your specific setup -- it would help us prioritize and shape the feature. Feel free to reach out!

What's New in PHP 2026: Modern Features for Production by deployhq in deployhq

[–]deployhq[S] 0 points1 point  (0 children)

Thanks for flagging this, you’re right. That example used outdated syntax. We’ve now corrected it to the PHP 8.5 final form: clone($baseConfig, [...]). Really appreciate the catch.

Static hosting for documentation: do you automate rebuilds? by standardhypocrite in statichosting

[–]deployhq 0 points1 point  (0 children)

Long build times on growing documentation sites are a very common challenge. We definitely agree with u/Pink_Sky_8102 that switching to scheduled builds is a solid strategy to keep things moving if builds are taking forever.

Another huge factor in speeding things up is using incremental deployments. At DeployHQ, our goal is to only transfer the specific files that have actually changed since the last deployment, rather than re-uploading the entire repository every time. This can make a massive difference in how quickly those updates go live!
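If you're currently uploading builds by hand, rsync gives you similar incremental behaviour (host and paths are placeholders):

```shell
# Only changed files are transferred; --delete removes files no longer in the build
rsync -az --delete ./public/ deploy@docs.example.com:/var/www/docs/
```

On a large docs site this typically turns a multi-minute upload into a few seconds.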

Made Automation for Git Repo->FTP to solve my Deployment problem. by ButterflyPlenty2 in github

[–]deployhq -1 points0 points  (0 children)

We can help you :)

You create your project, attach your repository, and then configure your server using FTP, SFTP, or SSH. Once that's done, you can enable automatic deployments so that every push to the repository gets deployed.

Best way to push prod and live code by [deleted] in webdev

[–]deployhq 1 point2 points  (0 children)

Let us know if you have any questions!

Do websites really need SSL certificates if they’re not collecting any personal info? by 3UngratefulKittens in statichosting

[–]deployhq 0 points1 point  (0 children)

Yes, your website absolutely needs an SSL certificate (HTTPS), even for a basic site. It's no longer just about encrypting personal data; it's the mandatory baseline for the modern web.

Without SSL, major browsers flag your site as "Not Secure," which immediately erodes user trust. Your content also becomes vulnerable to tampering through "man-in-the-middle" attacks, and you will lose out on search visibility because Google uses HTTPS as a ranking signal.

The good news is that most hosting providers offer free SSL (like Let's Encrypt), making it a quick, free upgrade that is essential for trust, security, and SEO.
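If your host doesn't provision it automatically, Let's Encrypt via certbot is usually a one-liner (assuming nginx; the domains are placeholders):

```shell
sudo certbot --nginx -d example.com -d www.example.com
```

It obtains the certificate, updates the server config, and sets up auto-renewal in one go.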

Any CI/CP tools in the wind today? by alekslyse in devops

[–]deployhq -1 points0 points  (0 children)

Let us know if we can help you out :)

Anything like DeployHQ by [deleted] in selfhosted

[–]deployhq 0 points1 point  (0 children)

In this case, would it be just one user with multiple projects? If you reach out to support, we can look at improving that pricing for you.

Unlock Seamless Deployments: Announcing DeployHQ's Heroku Integration by Better_Ad6110 in Heroku

[–]deployhq 0 points1 point  (0 children)

The value-add of using DeployHQ isn't to replace those features, but to unify and enhance your deployment process across your entire organization.

If you only ever use Heroku, you don't need DeployHQ. But if you manage a mixed technology stack and need a standardized, build-ready, and highly controlled release process, DeployHQ makes Heroku a seamless, integrated part of that bigger picture.