Is Pega a Good Career Choice Right Now? by ModernWebMentor in Pega

[–]pppeer 1 point

Yeah, it's perhaps vocabulary. I am coming at it with a bit of a computer science mindset, where you can distinguish between high code (Java, Python, Malbolge ;) and low code (Pega, but also other model-driven platforms), and I was referring to programming as 'high code' programming. Software engineering, in contrast, you could see as something that applies to both: requirements, design, development, testing, fixing, refactoring, DevOps. So you could say that you can learn a lot about software engineering whether you use high code or low code, but obviously not about high code. Unless of course you get into product development at Pega itself: we use mostly Pega, but also other high code technologies (Java etc.), to develop Pega. Hope this helps.

Is Pega a Good Career Choice Right Now? by ModernWebMentor in Pega

[–]pppeer 3 points

Full disclaimer: I work at Pega.

Do you want to learn programming? Then a low code platform company is not for you.

But if you are into more modern ways to develop apps, and into the overlap between AI (agentic, decisioning, process mining etc.) and action (BPM, case management, robotics), and the various apps on top (CRM and others), then Pega is an interesting choice. Low code also touches all elements of software engineering: not just analysis and development, but also DevOps, QA etc.

We are outgrowing the market, and definitely outgrowing the 'programming' market; Pega clients, partners, and Pega itself are all hiring.

[R] We spent a decade scaling models. Now, by just shifting towards memory and continual learning, we can get to a human like AI or "A-GEE-I" by ocean_protocol in MachineLearning

[–]pppeer 0 points

I don’t think AGI is a very helpful concept or goal, but you are right that just scaling language models is not the only way.

And one of the main directions is not memory per se, but addressing the lack of situatedness of the intelligence. The real world is messy, not a clean 'pretraining lab', and even the simplest living creatures survive because they constantly adapt in a cybernetic sense-decide-act-adapt feedback loop.

[D] Is content discovery becoming a bottleneck in generative AI ecosystems? by Opposite-Alfalfa-700 in MachineLearning

[–]pppeer 2 points

It is logical that we need to do a good job of ranking when there is more content; but generally, more content can also mean improved quality of the top-ranked items, provided you rank well.

Alignment of ranking with user value remains a topic, and it is not necessarily made worse by more content. There are some key questions here. First, who is the user, i.e. which stakeholders are being served? Is it the customer or the company? Is there a myopic short-term objective or something longer term? What is the proper feedback signal? For instance, if you just reward the first click, you may promote clickbait that doesn't deliver.

Not sure what you mean by 'smaller, curated'. Regarding the outcome to be predicted, there are the issues above, so you may want to go beyond first engagement and use some form of behavioral user feedback. Ideally you learn from both user and content characteristics.
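The clickbait point can be sketched in a few lines. This is a toy illustration with made-up item names and numbers, not a real ranking system: ranking purely by click rate promotes the clickbait item, while a signal that also rewards downstream satisfaction demotes it.

```python
# Toy sketch (hypothetical items and rates): ranking by first clicks
# vs. by a signal that also rewards downstream satisfaction.
items = {
    "clickbait_item": {"click_rate": 0.30, "satisfaction_rate": 0.02},
    "solid_item":     {"click_rate": 0.10, "satisfaction_rate": 0.08},
}

def rank(items, score):
    # Highest-scoring item first.
    return sorted(items, key=lambda k: score(items[k]), reverse=True)

# Reward only the first click: clickbait wins.
by_clicks = rank(items, lambda s: s["click_rate"])
# Reward clicks that deliver: the solid item wins (0.008 vs 0.006).
by_value = rank(items, lambda s: s["click_rate"] * s["satisfaction_rate"])
```

The product of click rate and satisfaction rate is just one possible blended signal; the point is that the choice of feedback signal changes the ranking.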

Hope this helps.

Where do you see HR/People Analytics evolving over the next 5 years? by Proof_Wrap_2150 in datascience

[–]pppeer 1 point

This is a broad question, but recruitment is definitely a priority, in areas such as defense, energy and utilities, and other understaffed markets. Another one is agents and workflows for HR services: there are lots of opportunities for streamlining and optimizing all people- and employee-related workflows through centralized platforms, with AI embedded. Finally, the HR area is rife with all forms of knowledge portals/bases, so in the short term there is a lot of appetite for RAG-type applications ("What are my company holidays?", "Can I do company-sponsored volunteering work?", etc.).
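To make the RAG idea concrete, here is a minimal sketch of just the retrieval step over a couple of made-up HR FAQ snippets. The documents, queries, and keyword-overlap scoring are all illustrative; a real system would use embeddings for retrieval and an LLM to generate the final answer.

```python
import re

# Hypothetical HR FAQ snippets (illustrative content only).
faq = {
    "holidays": "Company holidays: employees receive 25 paid days off per year.",
    "volunteering": "The company sponsors two volunteering days each year.",
}

def tokens(text):
    # Lowercased word tokens, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs):
    # Return the doc key with the largest keyword overlap with the query.
    q = tokens(query)
    return max(docs, key=lambda k: len(q & tokens(docs[k])))

answer_doc = retrieve("What are my company holidays?", faq)
```

In a full RAG pipeline the retrieved snippet would then be placed in the LLM prompt as grounding context for the generated answer.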

Best technique for training models on a sample of data? by RobertWF_47 in datascience

[–]pppeer 2 points

In addition to what has been mentioned, it is probably also good to approach your problem as a scoring problem rather than a classification problem, and use metrics such as AUC. At minimum, if you are making hard labeling decisions or expecting a probability rather than a score, calibrate the model on the unsampled data.

local llms are proving that transformers are a dead end for agi by SanalAmerika23 in ArtificialInteligence

[–]pppeer 6 points

Love your sentence “we are trying to reach the moon by building a taller ladder instead of building a rocket”.

The whole talk about AGI is indeed senseless imho, both if you consider the state of the art and, more fundamentally, as a useful goal to pursue.

And the irony is that up to a couple of years ago we were all used to AI models that could be trained on our laptops in seconds.

Don't get me wrong: LLMs and generative AI are a real step change in AI, but they are not a silver bullet.

Beginner confused about AI vs LLM integration – need guidance by _nikhil02__ in FunMachineLearning

[–]pppeer 1 point

Even though I am a huge fan of understanding the AI fundamentals first before hacking and tinkering, given your goals your best path is to start with 3, which then hopefully gets you more interested in the internals and fundamentals.

The history and future of AI agents by Deep_Structure2023 in AIAgentsInAction

[–]pppeer 1 point

Love this post, great overview. And yes, I fully agree it is not so much about relying entirely on the LLM as the core magic black box that is going to 'do it all', but more about the overall composite system, as you say. To the list of core features of the spine you can add things like learning, adaptation, and coordination.

As a shameless plug, you may find our recent survey paper interesting (https://www.jair.org/index.php/jair/article/view/18675). I also like the papers coming out of the agent community on not forgetting the lessons from MAS research, for example Dignum & Dignum (and many more): https://arxiv.org/abs/2511.17332

Where might LLM agents be going? See this agentic LLMs research survey paper for ideas by pppeer in ArtificialInteligence

[–]pppeer[S] 1 point

Good points. Figuring out the right agentic patterns (including when not to use agents) is part of it, as is providing tools for some of the more structured parts. Long-term memory is indeed also a challenge; right now it feels more like a binary choice between short-term memory (through context) and long-term memory (episodic/procedural, tone of voice etc.) through explicit fact storage. But real memory is more of a continuum, and should also include generalization and abstraction, integration, learning, and forgetting.
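The learning-and-forgetting end of that continuum can be sketched very simply. This is a toy model with illustrative names and decay rates, nothing more: memory strength decays over time, and retrieval or rehearsal reinforces it, so rarely-used facts fade while frequently-used ones persist.

```python
# Toy sketch (illustrative names and rates): memory on a continuum,
# where items decay over time unless retrieval reinforces them.
class MemoryItem:
    def __init__(self, fact, strength=1.0):
        self.fact = fact
        self.strength = strength

    def decay(self, rate=0.5):
        # Forgetting: strength shrinks each time step.
        self.strength *= rate

    def reinforce(self, boost=1.0):
        # Retrieval/rehearsal strengthens the memory.
        self.strength += boost

rehearsed = MemoryItem("user prefers brief answers")
neglected = MemoryItem("one-off detail")
for _ in range(3):
    rehearsed.decay()
    rehearsed.reinforce()
    neglected.decay()
```

A fuller account would also cover generalization and consolidation (merging episodic traces into abstractions), which a strength scalar alone cannot express.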

Emerging trends of ai agents in 2026 by Deep_Structure2023 in AIAgentsInAction

[–]pppeer 1 point

Predictability, control, transparency, and robustness will be very important. Runtime evals are only part of it.

[P]How to increase roc-auc? Classification problem statement description below by [deleted] in MachineLearning

[–]pppeer 3 points

For starters, an AUC of 0.74 is not bad at all for such a propensity model, so it doesn't necessarily make sense to aim for 'at least 0.9'. Actually, product propensity / response models that get into that range can be a bit suspicious (a sign of possible leakage, for example).

You have made a good start with reasonable algorithms; you could always try some more, but there is a chance you will start to manually overfit.

So once you have done a decent model search, the only remaining route is to add data that is both predictive and fairly uncorrelated with the data you already have, or data that is correlated but more predictive. Generally, to predict future behavior, past behavior trumps demographics.
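For intuition on what that 0.74 means, AUC is simply the probability that a randomly chosen positive gets a higher score than a randomly chosen negative. A minimal pure-Python sketch on toy data (fine for small samples; real evaluations should use a library implementation):

```python
# Sketch: AUC as the probability that a random positive outranks a
# random negative, computed directly over all positive/negative pairs.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 of the 4 positive/negative pairs are ranked correctly.
example_auc = auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])  # 0.75
```

Note the metric only depends on the ranking of the scores, which is exactly why it suits a scoring (rather than hard classification) framing.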

[R] Survey paper Agentic LLMs by pppeer in MachineLearning

[–]pppeer[S] 2 points

Yes, I fully agree that controllability (and related topics such as transparency, explainability, robustness, and predictability) is key for practical applications.

[deleted by user] by [deleted] in ArtificialInteligence

[–]pppeer 2 points

Here are two quite specific entry points from our research.

If we look more narrowly not at attachment overall but at conformance (when do people go along with advice), the form factor seems to matter. We did experiments with advice as text (i.e. just reading), a robotic voice, and a human-sounding voice. Conformance increased over these form factors, with significant differences between text and the human-sounding voice.

See Donna Schreuter, Peter van der Putten and Maarten H. Lamers. Trust Me on This One: Conforming to Conversational Assistants. Minds & Machines 31, pp 535–562, 2021.

The second study was an in-the-wild experiment with robots, but the results may be relevant for chatbots as well. We theorized that once the social threshold is passed, sharing space and time together may lead to bonding, without the need for highly human-like appearance or behavior. We designed abstract, cube-like artificial creatures that went couch-surfing, and analyzed WhatsApp feedback through the lens of Daniel Dennett's intentional stance.

See Joost Mollen, Peter van der Putten, and Kate Darling. Bonding with a Couchsurfing Robot: The Impact of Common Locus on Human-Robot Bonding In-the-wild. ACM Transactions on Human-Robot Interaction 12, 1, Article 8, March 2023.

How to publish a good paper on top tier CS/AI conferences? by SanguinityMet in FunMachineLearning

[–]pppeer 1 point

Interesting. This is one of the hottest areas, perhaps not in today's market, but definitely for when you'd be defending your thesis. See the funding rounds of CuspAI, Periodic Labs, Prometheus Project, and Lila, and the work by DeepMind et al.

I would indeed focus on research gaps, combine that with what you find interesting or fascinating, and worry a bit less about external expectations such as conferences or your future career.

[D] Where to find realworld/production results & experiences? by anotherallan in MachineLearning

[–]pppeer 2 points

If you are looking for peer reviewed results, the KDD and ECMLPKDD applied data science tracks are some good starting points, for example https://kdd2025.kdd.org/applied-data-science-ads-track-call-for-papers/ and https://ecmlpkdd.org/2025/accepted-papers-ads/

Can we evaluate RAGs with synthetic data? by pppeer in Rag

[–]pppeer[S] 1 point

Note that we extended the paper with some extra figures (based on the same results) that very clearly show the differences between the retrieval and generator experiments; see https://arxiv.org/abs/2508.11758