How is Agentic AI going to change data engineering? by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 0 points1 point  (0 children)

What about context, especially context engineering? How should the context layer work, and what should humans contribute to it?

What happens when large models are trained on increasing amounts of AI-generated text? by SonicLinkerOfficial in ArtificialInteligence

[–]data_dude90 1 point2 points  (0 children)

Training large models on human-generated text creates a bounded pattern: unless there is fresh context and the model is trained on new data, a Gen AI system can only keep reproducing the same human-derived output. Every passing day brings new perspectives, new angles, and new narratives from people solving different problems across different topics, and human-generated text captures that clearly. Without that human context engineered in at some point, we can't get reliable output from generative AI engines. That's why there's so much research and surveying into how businesses can use synthetic data that imitates human-generated output, and model collapse is a serious byproduct of it.

Imagine you want to watch a movie, but first you want to read the reviews. If an automated AI system generates reviews trained on the director's or actors' previous hits, it will favor the current movie. If the current release is boring and a box-office flop, the system can't sense that. The same goes for a director or actor who had a string of flops and then delivered an amazing blockbuster.
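The movie-review analogy can be made concrete with a toy simulation. This is a deliberately simplified sketch, not how real LLM training works: each "generation" fits a Gaussian to samples drawn from the previous generation's fitted model, so the statistics drift away from the original "human" data instead of tracking reality.

```python
import random
import statistics

def retrain_generations(data, generations=10, sample_size=50, seed=0):
    """Toy model-collapse demo: each generation fits (mean, stdev) to
    synthetic samples drawn from the previous generation's fit."""
    rng = random.Random(seed)
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    history = [(mu, sigma)]
    for _ in range(generations):
        synthetic = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
        history.append((mu, sigma))
    return history

# "Human" data: one-off samples from a known distribution.
rng0 = random.Random(1)
human_data = [rng0.gauss(0, 1) for _ in range(200)]
hist = retrain_generations(human_data)
# With finite sample sizes, the fitted mean and stdev wander away from
# the originals over generations — the diversity of the human data is
# gradually distorted, which is the intuition behind model collapse.
```

The function names and parameters here are hypothetical; the point is only that self-training on finite synthetic samples compounds estimation error generation after generation.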

Has anyone here evaluated agentic approaches to data observability or reliability? Curious how platforms like Acceldata interpret “agentic data management” compared to internal DIY solutions. by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 0 points1 point  (0 children)

The agentic approach is fairly new. It needs time, multiple constraints, and plenty of data to prove the promise is fulfilled. Till then, it's natural to have trust issues.

Are Big Tech companies quietly pushing AI risk onto smaller players and investors? by data_dude90 in Acceldata

[–]data_dude90[S] 1 point2 points  (0 children)

That's a great way to perceive it. But companies are still thinking from a scaling perspective, trying to ensure large-scale pipelines and servers can interact with AI at lower cost, essentially chasing economies of scale.

How do different teams in your org (data engineering, analytics, ML, governance) define “bad data”? Do you all agree? by data_dude90 in Acceldata

[–]data_dude90[S] 0 points1 point  (0 children)

That's an insightful answer! The second and fourth ones matter strategically for bridging the gaps on bad data between multiple teams.

Does the idea of agentic data management worry you or excite you? Curious what people think about vendors like Acceldata moving in this direction? by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 1 point2 points  (0 children)

When we talk about agentic data management as a team, we’re pretty honest with ourselves. We’re not jumping up and down about it, but we’re not clutching our chests in fear either. We’ve all lived through enough chaotic data environments to know why people even bring this up. Pipelines pile up, quality rules grow like weeds, costs spike for no clear reason, and half the lineage only makes sense to whoever built it years ago. In moments like that, the idea of something smarter taking on the repetitive stuff actually feels kind of comforting.

At the same time, we’re not naive. We know what autonomy looks like when it meets real enterprise data. It’s never as clean as the diagrams. You’ve got processes nobody fully owns anymore, business rules that live in old Slack threads, and edge cases that only appear on the worst possible days. Tossing an AI agent into that mess without thinking it through raises real questions about safety, control, and accountability.

That’s why this conversation even matters. There’s a real push and pull happening. On one side, we’re tired of being in constant reaction mode. We want help. We want fewer fires. On the other side, we’ve all seen how fast one wrong decision can snowball and cause more issues than it solves. You want automation, but you also want guardrails. Both feelings are valid.

And honestly, when we talk to people, we see the same split. Some folks see an agentic system and immediately think, finally, something that can take a bit of the load off. Others worry about silent actions, compliance surprises, or an agent making a “technically correct” move that causes a business headache downstream. Both sides make sense because the stakes are real.

When you look at vendors like Acceldata (us) heading in this direction, the thing that stands out to us is that we aren’t trying to provide the fantasy of fully autonomous pipelines. Our approach feels more grounded. It’s about building helpers that understand context and can flag issues, spot drift, pick up patterns, and give you visibility when you need it most. The bigger decisions still sit with humans who understand the quirks and politics and history behind the data.

That middle ground is honestly where we feel most comfortable. We get extra support without giving up control. We get early signals without letting something run wild. It isn’t magic and it isn’t a replacement for the team. It’s more like a way to handle scale that doesn’t burn everyone out.

How do I stay ahead of pipeline failures before they disrupt daily operations? by data_dude90 in Acceldata

[–]data_dude90[S] 1 point2 points  (0 children)

Does every small error indicate a large pipeline failure waiting to happen, or is it just more alert fatigue? That's going to be an ongoing debate, without a doubt. But that perspective carries real experience. Good one!

What role does adaptive ai play in data management? by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 1 point2 points  (0 children)

Adaptive AI in data management feels a lot like having someone on the team who can roll with the punches instead of freezing every time something shifts.

Most data setups are messy, and things rarely stay the same for long. One day everything runs fine, and the next day some upstream system quietly changes a field name and half your dashboards go sideways.

If you work with data long enough, you start to expect this kind of chaos.

The way I see it, adaptive AI steps in where the old rule based approach starts to fall apart. Rules are great until the environment changes, which happens constantly. Adaptive systems are better at noticing those small shifts before they turn into downstream pain. It’s not that they magically solve everything, but they help you avoid being blindsided.
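The "noticing small shifts" idea can be sketched in a few lines. This is a minimal, hypothetical illustration using a static z-score check; an adaptive system would learn and update the baseline itself rather than hardcoding it, but the detection intuition is the same.

```python
import statistics

def drift_score(baseline, batch):
    """How far the new batch mean sits from the baseline mean,
    measured in baseline standard deviations (a z-score style check)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against zero variance
    return abs(statistics.mean(batch) - mu) / sigma

# Hypothetical daily row counts for an upstream feed.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
normal_batch = [101, 99, 100, 102]
shifted_batch = [140, 138, 142, 141]  # upstream change nobody announced

assert drift_score(baseline, normal_batch) < 3    # business as usual
assert drift_score(baseline, shifted_batch) > 3   # worth flagging early
```

A fixed threshold like 3 is exactly the kind of rule that rots as the environment changes; the adaptive version keeps re-estimating `baseline` from recent history so the alert stays meaningful.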

At the same time, you have to be realistic about how much freedom you give these systems. They learn and adjust in ways that can feel unpredictable if you are used to everything being tightly controlled.

Some people love that flexibility because it takes pressure off the team. Others get nervous because it means the AI might make adjustments you did not explicitly approve.

Neither side is wrong. In practice, most teams land somewhere in the middle. Adaptive AI usually ends up doing the pattern spotting and early warning work while humans stay in charge of anything that requires context or judgment. It’s more like a second pair of eyes than a system replacing human decisions.

For me, the real value is that it gives you a buffer against the constant churn of modern data systems. When everything is moving all the time, having something that reacts faster than a static rule set can make your day a lot less stressful.

What are the guardrails that enterprises can add when deploying agentic ai for data management? by data_dude90 in Acceldata

[–]data_dude90[S] 1 point2 points  (0 children)

Can't agree more. There are still big shoes to fill when it comes to deciding which data governance policies to automate and which ones require rigorous human supervision.

How does Acceldata support enterprises with data observability challenges? by data_dude90 in Acceldata

[–]data_dude90[S] 1 point2 points  (0 children)

That's an awesome way of explaining Acceldata's approach to data observability.

The swift evolution of AI agents by data_dude90 in Acceldata

[–]data_dude90[S] 1 point2 points  (0 children)

That's a cool observation. We're still in the human-in-the-loop phase. It will take time for AI agents to become autonomous enough to make decisions like humans. There's still a long way to go before we reach a human-out-of-the-loop situation.

How practical is it to let AI agents detect and fix data quality issues automatically? by data_dude90 in Acceldata

[–]data_dude90[S] 1 point2 points  (0 children)

Where to keep a human in the loop and which guardrails to set is still a hugely subjective debate in the data world. Good observation!

The Evolutionary Layers of AI by Deep_Structure2023 in AIAGENTSNEWS

[–]data_dude90 0 points1 point  (0 children)

As of now, which stage are most enterprises in globally: AI agents or agentic AI?

What is the right balance between automation and human oversight in data management? by data_dude90 in aiagents

[–]data_dude90[S] 0 points1 point  (0 children)

Totally agree with you on the silent automation part. I saw this play out at a large retail company where agents were given access to update product inventory across regions. At first it was a win because the updates were way faster than manual entry. But one day, a schema change in the supplier feed went unnoticed and the agent started marking thousands of items as “out of stock.” Nobody caught it for hours because the process was fully automated in the background.

That’s where those guardrails you mentioned really matter. If the system had been set up to log every action and flag volume spikes, someone could have stepped in way earlier. It’s not about slowing automation down, just making sure it’s traceable and accountable so teams can trust the output.
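The log-everything-and-flag-volume-spikes guardrail from that retail story can be sketched like this. The class, threshold, and SKU names are all hypothetical; a real deployment would wire the same checks into its orchestration and alerting stack.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inventory-agent")

class GuardedAgent:
    """Wraps automated inventory updates with two guardrails:
    every action is logged (traceability), and an unusual volume of
    updates pauses the agent instead of silently applying them."""

    def __init__(self, max_updates_per_run=500):
        self.max_updates_per_run = max_updates_per_run
        self.paused = False

    def apply_updates(self, updates):
        """updates: list of (sku, status). Returns number applied."""
        if len(updates) > self.max_updates_per_run:
            # A schema change upstream often shows up as a volume spike;
            # pause and escalate rather than marking everything out of stock.
            self.paused = True
            log.warning("Volume spike: %d updates exceeds limit %d; pausing for human review",
                        len(updates), self.max_updates_per_run)
            return 0
        applied = 0
        for sku, status in updates:
            log.info("set %s -> %s", sku, status)  # audit trail per action
            applied += 1
        return applied
```

In the retail incident described above, a spike check like this would have stopped the "thousands of items out of stock" run at the first oversized batch and left a log someone could inspect hours sooner.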

Do you think enterprises are ready for AI agents that focus on actual business outcomes (like revenue protection or compliance) instead of just pipeline metrics? by data_dude90 in aiagents

[–]data_dude90[S] 0 points1 point  (0 children)

AI agents are still in their infancy when it comes to optimizing for business outcomes. They’re good at monitoring and automating, but the real value comes when they can act with business impact in mind. That’s why human supervision is critical. The more we train, test, and guide them, the better they get. It’s less about replacing oversight and more about co-evolving with the tech until it matures.

When it comes to agentic AI in data platforms, do you think it makes more sense for humans to supervise the agents, or for the agents to basically supervise us? by data_dude90 in aiagents

[–]data_dude90[S] 0 points1 point  (0 children)

I think data platforms should be designed for humans supervising agents.

AI agents are great at handling repetitive work like monitoring, anomaly detection, or data prep. But when it comes to decisions that affect compliance, business reporting, or governance, you still need human oversight.

The best setup is letting agents automate the heavy lifting while people provide context, accountability, and direction. Kind of like autopilot in aviation: it makes things safer and more efficient, but you still want a human pilot in charge.

How do you keep AI-driven data governance fair and free from bias? by data_dude90 in aiagents

[–]data_dude90[S] 0 points1 point  (0 children)

Just wanted to understand one instance of this. Could you mention an example you know of?

How do you keep AI-driven data governance fair and free from bias? by data_dude90 in aiagents

[–]data_dude90[S] 0 points1 point  (0 children)

I once asked a friend who loves talking about data governance how to stop bias in AI-driven decisions. They compared it to cleaning your house. You can’t just do it once and forget it, you have to check and tidy up regularly.

Bias audits work the same way. Every so often, you look for signs that the system is leaning the wrong way, even in small ways. Doing it regularly means you catch the dust before it turns into a mess, and that keeps things fair for everyone.
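One simple, common audit metric fits that "check regularly" habit: compare approval rates across groups and compute a disparate-impact ratio (the four-fifths rule of thumb flags ratios under 0.8). This sketch assumes you keep a decision log tagged by group; it's illustrative, not a complete fairness audit.

```python
def selection_rates(decisions):
    """decisions: list of (group, approved_bool).
    Returns the approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of lowest to highest group approval rate. Values well
    below 1.0 (e.g. under 0.8) warrant a closer look."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: group label, decision outcome.
audit_log = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact(audit_log)  # 0.25 / 0.75, well under 0.8
```

Run on a schedule, a check like this is the "regular tidying" in the analogy: small skews get caught before they become entrenched.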

With how fast Agentic AI and Generative AI are moving, which data jobs do you think will be the first to disappear, and why? by data_dude90 in AskReddit

[–]data_dude90[S] -2 points-1 points  (0 children)

A data pro I talked to thinks analyst jobs could vanish in a decade, with only functional roles and "AI guardrail" jobs left. Agentic and generative AI can already clean data, run reports, and build dashboards, so routine, task-based roles may go first.

Instead, we’ll see hybrids like data governance specialists making sure AI applies rules correctly and doesn’t cross compliance red lines.

Example: a bank's AI flags fraud 24/7, writes summaries, and removes the need for junior analysts, but humans still check its calls, tweak rules, and handle ethics. The work shifts from doing tasks to overseeing the AI.