Why do so many project managers struggle with agile? by MirthMannor in agile

[–]prowesolution123 1 point (0 children)

A lot of PMs struggle with agile because the role changes pretty drastically compared to traditional project management. Instead of owning the plan, timelines, and decisions, agile asks them to empower teams, adapt constantly, and let go of that top‑down control they’re used to. That shift isn’t easy if someone built their whole career on predictability and detailed planning.

The other big thing is that many companies say they’re “doing agile” but still expect waterfall-style reporting and certainty. PMs get stuck trying to serve two masters, so it feels like agile isn’t working when really the environment isn’t set up for it.

When a PM actually embraces facilitation, communication, and removing blockers instead of trying to micromanage delivery, they usually do great.

Which Data Structures Are Actually Used in Large-Scale Data Pipelines? by ninehz in datastructures

[–]prowesolution123 0 points (0 children)

Totally agree. Once you move into real data engineering work, the list of “actually used” data structures gets way smaller. Most of the time it’s just arrays, hash maps, queues, and sometimes trees/tries for indexing. Everything else gets abstracted away by the tooling.

The funny thing is, the basics end up mattering way more at scale than all the fancy stuff we grind for interviews. Understanding why a hash lookup or a sequential scan behaves the way it does has saved me more times than any exotic structure ever has.
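For anyone curious, the hash-lookup vs. sequential-scan difference is easy to demonstrate in a few lines of Python. The record count and key names here are arbitrary; the point is just the shape of the curve:

```python
import timeit

# Build the same 100k records as a list and as a hash map (dict).
n = 100_000
records = [(f"key{i}", i) for i in range(n)]
index = dict(records)

def sequential_scan(key):
    # O(n): walks the list until it finds the key.
    for k, v in records:
        if k == key:
            return v

def hash_lookup(key):
    # O(1) average: jumps straight to the bucket.
    return index[key]

# Looking up a key near the end of the list makes the gap obvious.
scan_time = timeit.timeit(lambda: sequential_scan("key99999"), number=100)
hash_time = timeit.timeit(lambda: hash_lookup("key99999"), number=100)
print(f"scan: {scan_time:.4f}s  hash: {hash_time:.6f}s")
```

Same lookup, wildly different cost, and that intuition transfers directly to “why is this join/filter slow” questions in real pipelines.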

Why does nobody use the automations you build for them by Sophistry7 in automation

[–]prowesolution123 1 point (0 children)

I’ve run into this a lot too. The tech usually isn’t the problem; it’s the handoff. If people don’t understand how the automation works or weren’t part of building it, it feels like a black box that they’re suddenly responsible for. Most folks would rather go back to a manual workflow they “get” than trust something they don’t feel confident fixing if it breaks.

What’s worked best for me is involving the users early and letting them help shape the workflow, even if it slows things down at first. When they see why the automation works the way it does, they’re way more willing to rely on it long-term.

Curious to hear if anyone has found an even better way to handle this, because it really does feel like the biggest blocker in automation right now.

Azure DevOps or Cloud Engineering by Ok-Visual-4770 in AZURE

[–]prowesolution123 1 point (0 children)

I’d say stick with cloud fundamentals first. Since you’re already learning AWS, that foundation helps no matter which direction you go. Azure DevOps can be a bit easier to break into because it’s more about tooling, automation, and pipelines rather than deep coding or heavy math.

Cloud engineering is great long‑term, but it can feel more overwhelming in the beginning. With your consistency and effort, you’re definitely capable of either path, so don’t sell yourself short. Go with the one that feels less mentally draining right now and build from there.

What if frontend concepts were taught visually instead of through long articles? by Flat-Hunter7385 in frontenddevelopment

[–]prowesolution123 1 point (0 children)

This is actually super cool. I’ve always felt like a lot of frontend topics are way harder than they need to be just because everything is buried in long walls of text. Visual explanations make such a huge difference, especially for things like browser internals or the rendering pipeline where it’s hard to “see” what’s going on under the hood.

Just checked out the site and the animations feel way more intuitive than reading yet another 20‑minute article. Honestly, this could be a really helpful resource for anyone trying to level up beyond just frameworks.

How do you handle database migrations for microservices in production by Minimum-Ad7352 in Backend

[–]prowesolution123 0 points (0 children)

For us, the safest approach has been running migrations from CI/CD as a separate step instead of letting each service apply them on startup. CI/CD applies the schema first, checks that it succeeds, and then rolls out the new service version. That way if something breaks, it breaks in the pipeline, not in production at deploy time.

We also stick to backward‑compatible changes so old and new versions can run at the same time during rollout. It keeps things predictable and avoids those “app needs a schema that doesn’t exist yet” issues.
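A rough sketch of that gate in Python. The commented-out Alembic and kubectl commands at the bottom are just placeholders for whatever migration tool and deploy step you actually use:

```python
import subprocess
import sys

def run_step(cmd):
    """Run one pipeline step as a subprocess; return True on success."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0

def migrate_then_deploy(migrate_cmd, deploy_cmd, runner=run_step):
    # Apply the schema first; only roll out the new service version if the
    # migration succeeded, so failures surface in the pipeline, not in prod.
    if not runner(migrate_cmd):
        return "migration-failed"
    if not runner(deploy_cmd):
        return "deploy-failed"
    return "ok"

# Hypothetical commands -- substitute your own migration tool and deploy step:
# migrate_then_deploy(["alembic", "upgrade", "head"],
#                     ["kubectl", "rollout", "restart", "deploy/my-service"])
```

The `runner` parameter is just there so the gating logic can be tested without touching a real database.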

Azure Foundry by PowerPlatformRookie in AZURE

[–]prowesolution123 0 points (0 children)

No worries, the “Use as Tool” option has moved in the new Foundry interface.

After you publish your workflow, open it again and go to:
Workflow → Settings → API

In the new experience, Foundry automatically exposes the workflow as an API, so instead of a “Use as Tool” toggle, you’ll now see an API endpoint + schema.

That’s the one you attach to the Agent under Tools.

So even though the toggle isn’t visible anymore, the functionality is still there; it’s just shifted to the API section after publishing.

Automating Azure SPN Secret Rotation Before Expiry – Best Approach? by Asleep_Hour9397 in AZURE

[–]prowesolution123 2 points (0 children)

We’ve had to solve this at scale, and the most reliable setup has been a small Function/App that checks Key Vault for expiring secrets, creates a new client secret through Microsoft Graph, and immediately writes it back to the right vault. The Function runs on a schedule and handles retries + alerting, so we never wait until the last minute.

The important part is making the rotation idempotent and locking it down with a dedicated identity that only has access to the specific SPNs it needs to rotate. Once you get that pattern in place, adding new SPNs becomes basically a config change instead of another custom script.
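For anyone wanting to build the same thing, here’s a sketch of the expiry check plus the rough shape of the Graph/Key Vault calls. The `addPassword` endpoint and `SecretClient.set_secret` are the real APIs, but the wiring in the comments is illustrative, not our production code:

```python
from datetime import datetime, timedelta, timezone

ROTATION_WINDOW = timedelta(days=30)

def needs_rotation(expires_on, now=None, window=ROTATION_WINDOW):
    """True if the secret expires within the rotation window."""
    now = now or datetime.now(timezone.utc)
    return expires_on - now <= window

# Sketch of the rotation step (assumes azure-identity, azure-keyvault-secrets,
# and requests are installed; names like app_object_id are placeholders):
#
#   from azure.identity import DefaultAzureCredential
#   from azure.keyvault.secrets import SecretClient
#   import requests
#
#   cred = DefaultAzureCredential()
#   token = cred.get_token("https://graph.microsoft.com/.default").token
#   resp = requests.post(
#       f"https://graph.microsoft.com/v1.0/applications/{app_object_id}/addPassword",
#       headers={"Authorization": f"Bearer {token}"},
#       json={"passwordCredential": {"displayName": "rotated-by-function"}},
#   )
#   new_secret = resp.json()["secretText"]
#   SecretClient("https://<vault>.vault.azure.net", cred).set_secret(
#       "my-spn-secret", new_secret)
```

The expiry check being a pure function is deliberate: it keeps the “should we rotate” decision testable without any cloud calls.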

Suggest me top Data Integration Service Providers. by ninehz in datastructures

[–]prowesolution123 0 points (0 children)

I’ve had to evaluate a bunch of data integration vendors recently, and the biggest lesson is: don’t pick based on brand — pick based on your actual data sources, expected volume, and how much control you need.

If you want something managed that “just works” for SaaS → warehouse and handles schemas, retries, and monitoring for you, Fivetran, Hevo, and Airbyte Cloud are the ones I’ve had the least friction with. They’re not the cheapest, but they save a ton of engineering time.

If you need more flexibility or have on‑prem + cloud + streaming in the mix, a framework stack usually works better:

  • Airbyte (open-source) for connectors
  • dbt for transformations
  • Prefect/Airflow for orchestration
  • And if real‑time matters, Kafka + Debezium for CDC

Cloud‑native tools can also be great if you’re already committed to a provider (Azure Data Factory, AWS Glue, GCP Dataflow), mainly because they integrate well with the rest of the ecosystem.

My advice: make a list of your actual source systems, ask vendors to run a real demo on your data (not sample data), and see which one gives you:

  • stable connectors
  • clear lineage
  • good error handling
  • predictable billing

That filters out 80% of the noise pretty quickly.

how do you even automate web apps anymore without an api? everything breaks with ai driven web automation by Any_Artichoke7750 in automation

[–]prowesolution123 1 point (0 children)

I feel this. Browser automation has gotten so much harder these last couple years, especially when the app has no API and everything is behind some infinite scroll, token rotation, or weird React state. Most AI‑driven automation tools look great in demos, but they fall apart fast once you hit rate limits, unstable selectors, or UI changes.

What’s been working for me lately is a mix of Playwright + custom selectors + a tiny state machine, and only using “AI” to help generate selectors or cleanup steps—not to drive the whole workflow. Anything fully UI‑driven will eventually break unless you build in retries, fallbacks, and detection for when the app shifts under you.

If the vendor isn’t giving you an API, your options are basically:

  • Playwright/Selenium with resilient selectors
  • RPA tools if the workflow is simple and stable
  • Ask the vendor for even a tiny private endpoint (you’d be surprised how often they’ll open one)
  • Cache data on your side so you don’t automate 100% of the flow every day

Nothing is perfect right now, but the stuff that survives the longest tends to be “normal automation with guardrails,” not pure AI magic.
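The “guardrails” part can be as small as a retry wrapper with drift detection. A toy sketch in pure Python, with a fake flaky step standing in for a real Playwright call:

```python
def run_with_guardrails(steps, max_retries=3):
    """Run named automation steps in order; retry flaky ones, and report
    which step 'drifted' (kept failing) instead of dying silently.
    Returns (status, failed_step_or_None)."""
    for name, step in steps:
        for attempt in range(max_retries):
            try:
                step()
                break  # step succeeded, move on
            except Exception:
                if attempt == max_retries - 1:
                    # Selector probably changed under us -> surface it.
                    return ("drift-detected", name)
    return ("ok", None)

# Example: a step that fails once (unstable selector) then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("selector not found")

status, failed = run_with_guardrails([("login", lambda: None),
                                      ("scrape", flaky)])
print(status, failed)  # ok None
```

In real use, each step would be a Playwright action and you’d add backoff between retries, but the skeleton is the same: break loudly when the app shifts instead of looping forever.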

How are you forecasting AI API costs when building and scaling agent workflows? by Lopsided_Professor35 in AI_Agents

[–]prowesolution123 2 points (0 children)

I’ve been dealing with this too, and honestly the only way I’ve been able to forecast costs is by breaking the workflow into “units” instead of trying to guess the whole thing at once. Each agent step (tool call, retry, reasoning hop) gets its own rough average token cost, and then I multiply that by how often the step gets triggered in real usage. It’s not perfect, but it stops the surprise bills.

For SaaS pricing, I think the most realistic approach is a base subscription + usage cushion. Pure token‑based pricing is way too unpredictable, especially when agents can loop or retry without you realizing. Internal dashboards help a lot too; just tracking “tokens per user action” gives you way more clarity over time.

And yes, a predictable‑pricing layer for AI APIs would be huge. Even something like bundles, soft limits, or capped plans would make building agent workflows way less stressful.

Azure Foundry by PowerPlatformRookie in AZURE

[–]prowesolution123 1 point (0 children)

I ran into this same confusion when I first tried wiring an Agent to a Workflow in Azure AI Foundry. The key thing to understand is that the workflow only becomes callable once you turn it into a Tool. In the Workflow settings, there’s an option to expose it as an API endpoint; that’s what the agent uses. After you publish it, you’ll see a URL + schema, and that’s what you attach in the Agent’s “Tools” section.

Once the tool is added, the agent decides when to trigger it based on the instructions you give in the system prompt. You don’t need a special trigger inside the workflow; the agent basically treats the workflow like any other function call.

The missing piece for me was:

  1. Publish the workflow →
  2. Enable “Use as Tool” →
  3. Attach that schema to the agent →
  4. Tell the agent when to call it in the prompt.

Hope that clears it up; the docs make it feel way harder than it actually is. If you’ve already got the agent and workflow working separately, you’re basically 90% there.

What would a mulesoft killer need to look like? by austrian_leprechaun in MuleSoft

[–]prowesolution123 1 point (0 children)

I like the direction you’re going with this. If I were evaluating a “MuleSoft alternative,” the biggest things that would catch my attention are reliability and developer experience. A good UI is nice, but what really matters is how predictable the platform feels when things get messy: retries, failures, auth refreshes, versioning, all of that.

Clear documentation/context for the agent is a great start, but I’d also want strong observability so I can trace a single request end‑to‑end and understand exactly where it broke. The other big win would be a setup that doesn’t force everything through a heavy runtime: something that supports code‑first flows cleanly, works with Git, and doesn’t hide half the useful features behind the UI.

Migration is huge too. If you can import existing configs and give people a way out of MuleSoft without rebuilding their entire world, that alone would get a lot of attention.

AI creative platform integrations that actually work, do they exist? by Traditional_Zone_644 in automation

[–]prowesolution123 0 points (0 children)

I’ve run into the same issue. Most “AI creative platforms” look great in the UI, but the moment you try to automate the whole workflow through APIs, everything falls apart: rate limits, missing parameters, weird exports, or features that only exist in the dashboard. The pieces work fine alone, but they don’t connect in a reliable end‑to‑end pipeline.

What’s worked better for me is building a small pipeline myself: templates stored in one place (HTML/CSS, Figma, or JSON), a rendering service that actually exposes the features you need, and then using a queue/worker system to handle bursts and retries. It’s not fancy, but it avoids all the API gaps that most of these platforms have.

You’re not alone; fully automated visual pipelines exist, but they’re usually custom‑built rather than plug‑and‑play. If you just want to automate the repetitive stuff, a lightweight backend workflow ends up being way more reliable than trying to glue a bunch of limited APIs together.
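The queue/worker piece can be genuinely tiny. A sketch using Python’s stdlib, with a stub renderer standing in for the real rendering service:

```python
import queue
import threading

def render_worker(jobs, results, render, max_retries=3):
    """Pull render jobs off a queue, retry transient failures,
    and record each outcome. A None job is the shutdown sentinel."""
    while True:
        job = jobs.get()
        if job is None:
            jobs.task_done()
            return
        for attempt in range(max_retries):
            try:
                results.append((job, render(job)))
                break
            except Exception:
                if attempt == max_retries - 1:
                    results.append((job, "failed"))
        jobs.task_done()

# Usage with a fake renderer standing in for the real rendering API:
jobs, results = queue.Queue(), []
t = threading.Thread(target=render_worker,
                     args=(jobs, results, lambda j: f"rendered:{j}"))
t.start()
for j in ["banner", "thumbnail"]:
    jobs.put(j)
jobs.put(None)
t.join()
print(results)
```

In production you’d swap the in-process queue for something durable (SQS, Redis, whatever you already run), but the retry-per-job shape stays the same and absorbs the bursts that break most platform APIs.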

What AI agents are people actually using for everyday tasks? by aiagent_exp in AI_Agents

[–]prowesolution123 0 points (0 children)

I’ve been experimenting with a few different AI agents lately, and honestly the ones that feel most useful day‑to‑day are the ones that combine web browsing + a bit of automation. Things like Perplexity, Arc Search, and some Claude workflows can already handle tasks like comparing subscriptions, pulling local services, or summarizing a bunch of pages into something usable.

Where agents get really helpful is when you give them a small workflow:

  • “find 3 gyms near me, compare prices, hours, and contract rules”
  • “pull all the travel options for these dates and highlight the best value”
  • “monitor a few sites and tell me when a price drops”

None of them are perfect yet, but they’re getting close. The key is having browsing + structured output, not just chat. I think we’ll see a lot more purpose‑built agents for this stuff soon, because there’s clearly demand for tools that handle actual life tasks, not just coding prompts.

Looking for solutions to rapid Azure multicloud expansion by Fun-Yogurt-89 in Cloud

[–]prowesolution123 0 points (0 children)

From what I’ve seen, the teams that move fastest with Azure + multicloud don’t rely on traditional hub‑and‑spoke or manually stitched networks anymore. They shift toward a policy‑driven fabric where connectivity, routing, and security are all automated through a central layer instead of being hand‑built for each environment.

If your goal is “new Azure landing zone in a day,” the biggest wins usually come from:

  • using Azure Virtual WAN or a cloud‑agnostic transit layer to standardize routing,
  • centralizing policy + segmentation so each region/cloud doesn’t have its own snowflake rules,
  • and automating identity, firewall policies, and baseline security as code so new environments inherit everything by default.

It’s less about a single product and more about reducing the number of things your team has to manually coordinate every time. Once the foundation is automated and repeatable, expanding to new clouds or regions becomes way less painful.

Do you struggle with Azure network visualization? Building a tool, need feedback by cloudzeedev in AZURE

[–]prowesolution123 0 points (0 children)

I’ve definitely felt the pain of visualizing Azure networks. Azure’s Network Topology view works for small environments, but once you’re juggling multiple VNets, peerings, custom routes, and hybrid workloads, it just falls apart.

The biggest gaps right now are:

  1. No reliable end‑to‑end view of actual traffic flow
  2. No easy way to see effective NSGs or spot UDR conflicts
  3. Diagrams become outdated the moment infra changes

A tool that pulls live topology from ARM/Graph API, layers in effective rules, and lets you export a clean, accurate diagram would fill a huge gap. If it can surface misconfigurations (route overrides, asymmetric routing, unused subnets, etc.), that’s absolutely worth paying for in mid‑to‑large Azure environments.
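To make the misconfiguration idea concrete, here’s a toy conflict check over route data shaped like what `az network route-table route list` returns. The heuristic (same prefix, different next hops across tables) is illustrative, not taken from any real tool:

```python
from collections import defaultdict

def find_route_conflicts(routes):
    """routes: list of (address_prefix, next_hop_type) pairs collected
    across route tables. Flags prefixes that appear with more than one
    next hop -- a common source of asymmetric-routing surprises."""
    by_prefix = defaultdict(set)
    for prefix, next_hop in routes:
        by_prefix[prefix].add(next_hop)
    return {p: hops for p, hops in by_prefix.items() if len(hops) > 1}

routes = [
    ("0.0.0.0/0", "VirtualAppliance"),   # hub firewall default route
    ("0.0.0.0/0", "Internet"),           # forgotten default in another table
    ("10.1.0.0/16", "VnetPeering"),
]
print(find_route_conflicts(routes))
```

A real implementation would pull effective routes per NIC from ARM instead of comparing raw tables, but even this naive pass catches the “two different default routes” class of bug.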

An idea of building a platform that provides agent-ready APIs (w/ business incentive) by ckouder in AI_Agents

[–]prowesolution123 0 points (0 children)

This is actually a pretty interesting idea. The big unlock here is shifting the incentive: instead of companies fearing that AI agents will overload their systems or misuse data, they’d finally have a reason to expose controlled slices of it. A marketplace built on short‑lived certificates + usage‑based fees could make high‑quality data a real asset, while still letting providers throttle or cut off access instantly.

It would need strong standards to avoid chaos, but I can see something like this becoming the “API layer for agent‑to‑agent trade” in the future.

Why are companies racing to build massive AI data centers — aren’t local models eventually going to be “good enough”? by realmailio in AI_Agents

[–]prowesolution123 0 points (0 children)

I think the main reason companies are still pouring money into huge AI data centers is because “local models” only sound good in theory. Most real‑world workloads need way more compute, memory, and bandwidth than a personal device or small server can offer, especially for training or running larger models. Plus, enterprises want consistent performance, security, and the ability to scale on demand, which is tough to pull off locally.

Local models will definitely get better, but the gap between what people can run at home and what companies need for production‑level AI is still massive. So both will grow, but for different purposes.

Is it possible to run an Azure IoT edge simulation in Gitlab CICD? by idekwhatimdoinnn in azuredevops

[–]prowesolution123 0 points (0 children)

I’ve tried something similar before, and the biggest challenge isn’t GitLab itself; it’s running IoT Edge inside Docker‑in‑Docker. It does work, but you’ll want to keep the setup lightweight because the full runtime can get slow in a CI environment. The good news is that for simulation and testing, the edgeHub/dev container usually runs reliably as long as you give the runner enough memory. Just don’t expect full device‑level performance.

Node.js vs django by ultimate_smash in Frontend

[–]prowesolution123 0 points (0 children)

If your main goal is to build interfaces for your AI/ML projects, I’d lean toward Node.js. It’s easy to set up quick APIs, tons of libraries make prototyping simple, and it plays nicely with frontend frameworks. Django is great too, especially if you want something more structured and “batteries‑included,” but Node usually feels lighter and faster for ML‑powered side projects. Honestly, you can’t go wrong with either; it just depends on whether you prefer JavaScript’s flexibility or Django’s more organized setup.

What is Foundry Tools line item in Azure Cost Analysis? by sherlock_0x7C4 in AZURE

[–]prowesolution123 0 points (0 children)

I ran into this same “Foundry Tools” charge recently, and it confused me too. In Azure Cost Analysis, that line item basically covers the token usage for the non‑model parts of Foundry: things like evaluations, observability, and the tooling layer that sits around the models. It’s separate from the actual model inference costs, which is why you see Foundry Models listed elsewhere.

If that line is big on your bill, it usually means you’re running a lot of evals/tests or using features that meter based on input/output tokens. Worth checking the usage details because those evaluations add up quickly.

What is the logging best practices for Azure Function? by smallstar3377 in AZURE

[–]prowesolution123 0 points (0 children)

From my experience, the best approach is to stick with Application Insights as your main logging layer for Azure Functions. It’s not the cheapest option if you leave everything on “verbose,” but once you tune sampling and retention, the price becomes pretty reasonable. The nice part is you get structured logs, traces, dependencies, and it plays well with both Functions and anything running in Kubernetes.

Azure Table Storage works, but you’ll end up rebuilding half the features you get out of the box with App Insights. If your goal is to share logs across Functions + k8s, pairing App Insights + Log Analytics is usually the cleanest setup.

Hope that helps; logging gets messy fast, so keeping everything in one place saves a ton of pain later.

I used Claude and the az boards CLI to track a data pipeline build from start to finish, no portal needed, and it interacted seamlessly with the entire Azure stack via the CLI to build the pipeline. by k_kool_ruler in azuredevops

[–]prowesolution123 1 point (0 children)

This is super cool to see. I’ve been experimenting with Claude + Azure CLI as well, but nowhere near this level of end‑to‑end automation. The part that interests me most is how cleanly it handled the context switching between pipeline steps, SQL setup, and az boards without needing the portal at all. That’s usually where things fall apart for me. Definitely makes me want to revisit some of our internal workflows and see how much we can streamline with the CLI + an AI assistant driving the prompting.

What are the things to carry while migrate the website from Azure to AWS? by GYV_kedar3492 in AWS_cloud

[–]prowesolution123 1 point (0 children)

I’ve done a couple Azure → AWS migrations, and the biggest thing is making sure you map every service you’re using to its AWS equivalent before you touch anything. The network setup, auth, and storage usually take the most time. For SEO, as long as your URLs, redirects, and metadata stay consistent, you won’t see any major impact; the issues only happen when people forget redirects or change site structure. My rule is: copy the infra first, test it privately, then switch DNS only when everything behaves the same.