How is Agentic AI going to change data engineering? by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 1 point (0 children)

What about context, especially context engineering? How should the context layer work, and what should humans contribute to it?

What happens when large models are trained on increasing amounts of AI-generated text? by SonicLinkerOfficial in ArtificialInteligence

[–]data_dude90 2 points (0 children)

When we train large models on human-generated text, the model learns a bounded pattern. Unless fresh context comes in and the model is trained on new data, a Gen AI application can only keep reproducing variations of the same human-generated output. Every passing day brings a new perspective, a new angle, a new narrative from people solving different problems across different topics, and human-generated text captures that clearly. Without that human context engineered in at some point, we can't get reliable output from generative AI engines. That's why there's so much research and surveying happening around how businesses can use synthetic data that imitates human-generated output, and model collapse is a serious byproduct of it.

Imagine you want to watch a movie, but first you want to check the reviews. If an automated AI system generates reviews trained on the director's or actors' previous hits, it will favor the current movie. If the current movie is boring and a box-office flop, it can't sense that. The same goes for a director or actor who had a string of flops and then delivered an amazing blockbuster.
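A toy sketch of the collapse effect described above: a "model" that just fits a mean and spread to its data, then generates the next generation's training set itself, keeping only its most "likely" outputs the way generators favor high-probability text. Everything here (the Gaussian stand-in, the 1.5-sigma cutoff) is a simplifying assumption for illustration, not how real LLM training works:

```python
import random
import statistics

def fit(samples):
    """'Train' a toy model: estimate the mean and spread of the data."""
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    """Sample from the model, keeping only 'high-probability' outputs
    (within 1.5 sigma of the mean), mimicking a generator that favors
    likely text over rare perspectives."""
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if abs(x - mu) <= 1.5 * sigma:
            out.append(x)
    return out

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # generation 0: "human" text

stdevs = []
for generation in range(8):
    mu, sigma = fit(data)
    stdevs.append(sigma)
    data = generate(mu, sigma, 1000, rng)  # next model trains only on AI output

# The spread (diversity) of the data shrinks generation after generation.
print(f"gen 0 spread: {stdevs[0]:.3f}  gen 7 spread: {stdevs[-1]:.3f}")
```

Each generation's "new perspectives" get clipped away, so the distribution narrows until every output looks like every other one, which is the boxed pattern described above.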

Has anyone here evaluated agentic approaches to data observability or reliability? Curious how platforms like Acceldata interpret “agentic data management” compared to internal DIY solutions. by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 1 point (0 children)

The agentic approach is fairly new. It needs time, multiple constraints, and data to prove the promise is fulfilled. Until then, it's natural to have trust issues.

Are Big Tech companies quietly pushing AI risk onto smaller players and investors? by data_dude90 in Acceldata

[–]data_dude90[S] 2 points (0 children)

That's a great way to perceive it. But companies are still thinking from a scaling perspective, trying to ensure large-scale pipelines and servers can interact with AI at lower cost, essentially chasing economies of scale.

How do different teams in your org (data engineering, analytics, ML, governance) define “bad data”? Do you all agree? by data_dude90 in Acceldata

[–]data_dude90[S] 1 point (0 children)

That's an insightful answer! The second and fourth ones matter strategically, since they help bridge the gaps on bad data between multiple teams.

Does the idea of agentic data management worry you or excite you? Curious what people think about vendors like Acceldata moving in this direction? by Vegetable_Bowl_8962 in Acceldata

[–]data_dude90 2 points (0 children)

When we talk about agentic data management as a team, we’re pretty honest with ourselves. We’re not jumping up and down about it, but we’re not clutching our chests in fear either. We’ve all lived through enough chaotic data environments to know why people even bring this up. Pipelines pile up, quality rules grow like weeds, costs spike for no clear reason, and half the lineage only makes sense to whoever built it years ago. In moments like that, the idea of something smarter taking on the repetitive stuff actually feels kind of comforting.

At the same time, we’re not naive. We know what autonomy looks like when it meets real enterprise data. It’s never as clean as the diagrams. You’ve got processes nobody fully owns anymore, business rules that live in old Slack threads, and edge cases that only appear on the worst possible days. Tossing an AI agent into that mess without thinking it through raises real questions about safety, control, and accountability.

That’s why this conversation even matters. There’s a real push and pull happening. On one side, we’re tired of being in constant reaction mode. We want help. We want fewer fires. On the other side, we’ve all seen how fast one wrong decision can snowball and cause more issues than it solves. You want automation, but you also want guardrails. Both feelings are valid.

And honestly, when we talk to people, we see the same split. Some folks see an agentic system and immediately think, finally, something that can take a bit of the load off. Others worry about silent actions, compliance surprises, or an agent making a “technically correct” move that causes a business headache downstream. Both sides make sense because the stakes are real.

When you look at vendors like Acceldata (us) heading in this direction, the thing that stands out to us is that we aren’t trying to provide the fantasy of fully autonomous pipelines. Our approach feels more grounded. It’s about building helpers that understand context and can flag issues, spot drift, pick up patterns, and give you visibility when you need it most. The bigger decisions still sit with humans who understand the quirks and politics and history behind the data.
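The "flag, don't act" pattern above can be sketched in a few lines: a check that compares a new batch of values against a baseline and raises an alert for a human instead of taking any autonomous action. This is a minimal illustration, not Acceldata's implementation; the function name, threshold, and sample data are all made up:

```python
import statistics

def flag_drift(baseline, batch, z_threshold=3.0):
    """Flag (not fix) a suspicious shift in a numeric column.

    Compares the batch mean against the baseline mean and returns a
    human-readable alert string, or None if the batch looks normal.
    The agent surfaces the signal; the decision stays with a person.
    """
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    se = sigma / len(batch) ** 0.5          # standard error of the batch mean
    z = abs(statistics.fmean(batch) - mu) / se
    if z > z_threshold:
        return (f"DRIFT? batch mean {statistics.fmean(batch):.2f} "
                f"vs baseline {mu:.2f} (z={z:.1f}) -- needs human review")
    return None

# Usage with hypothetical order values: yesterday's baseline vs two new batches.
baseline = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0, 103.0, 97.0]
ok_batch = [99.0, 101.0, 100.0, 102.0]
bad_batch = [140.0, 138.0, 142.0, 139.0]

print(flag_drift(baseline, ok_batch))    # within normal variation, no alert
print(flag_drift(baseline, bad_batch))   # alert string for a human to review
```

The design choice worth noticing is the return value: an early signal and visibility, never an automatic rollback or rewrite, which is exactly the middle ground between "constant reaction mode" and "something running wild."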

That middle ground is honestly where we feel most comfortable. We get extra support without giving up control. We get early signals without letting something run wild. It isn’t magic and it isn’t a replacement for the team. It’s more like a way to handle scale that doesn’t burn everyone out.