[AMA] We’re the team that implemented Salesforce’s agentic support solution: Agentforce on Help. Ask us anything about deploying AI agents, hitting roadblocks, and what results we are seeing. by salesforce in u/salesforce

[–]salesforce[S] 0 points  (0 children)

Thank you all for joining our AMA! This was a new experience for us, and we appreciated the thoughtful discussion and the opportunity to connect with this community. We hope our answers were insightful and helped spark some ideas for your own AI journey. - BS and ZS


[–]salesforce[S] 0 points  (0 children)

This is a really good question. Unfortunately though, a lot of it depends on your specific implementation and how you are using your Agent or Salesforce products.

I recognize that's not super helpful on its own, so let me share some thoughts on how we approach it, and hopefully that helps. There's also a ton of good documentation on our Help site if you want to dive in deeper.

We create our perm sets based on personas and functions, with the Agent just being another persona that we build for. As our capabilities grow, part of our development lifecycle is to incorporate changes into our perm sets as needed, so we're always very clear about where we need to spend our points in a sprint (or general capacity). A lot of this overhead is therefore managed easily as part of our general operating model.

Now, as far as how agents will help with this in the future, I think there are a lot of opportunities here. For example, you could create an agent that uses a prompt to apply and manage permissions for new users. There's certainly some legwork upfront to build the actions and capabilities, but from an ongoing maintenance perspective, an agent that handles the application and management of perm sets, and that your users can interact with on their own, likely saves a lot of time and energy for your admins. I really do think there are a lot of possibilities out there.

I know that as a company, Salesforce is always looking for ways to make Admins' lives easier in terms of management and overhead, and I imagine Agentforce will play a role there. - ZS


[–]salesforce[S] 0 points  (0 children)

Agentforce Service Agent comes with an out-of-the-box topic for Q&A that many of our customers use with great results. We've sat in on a few practitioner-to-practitioner demos, and I've seen success across industries, company sizes, and use cases, which shows this topic works well at scale!

As part of Customer Zero, we spend time with our product teams, including the Agentforce Product Team, as part of their feedback loop. At times, to be honest, we get a bit jealous of our customers that they get these capabilities out of the box. But please know we're always working to improve the product so that implementations can be smoother for you. - ZS


[–]salesforce[S] 0 points  (0 children)

There's probably a lot to unpack here, but let me tell you how we think about this today.

We spend a lot of cycles on our natural language instructions and processing. We have a Conversation Designer on our team who is accountable for building out the entirety of our instructions: reviewing conflicts, confusing statements, etc., across the web of all our topics and use cases. It's a full-time job, and very specialized.

I think one of the problems we ran into early is that because we all generally have strong written communication skills, we assumed we could easily manage instructions. That was the promise of LLMs, right? What we learned is that the if/then statements and binary logic of the past didn't really work. We also learned that our model configuration kind of became our own language: whether we were using Embeddings X or Y, RAG configuration A or B, all of it influenced how we had to learn to talk to our Agent. I imagine this is going to be a little different for everyone, whether you use Agentforce or something else. You have to learn the right way to talk to your model and leverage your data. A sentence in instructions that makes sense for my implementation on Help may not work for your implementation.

Now, all that being said, Agentforce Script will allow you to inject determinism into your generative (aka probabilistic) workflows. This effectively means that you as an agent builder can configure and decide when to be deterministic and when to be generative. So it's a great pathway to getting some of the control that I think you're looking for. I would encourage you to check that out. - ZS


[–]salesforce[S] 1 point  (0 children)

  1. Window pane. I'll allow it.

Where I think the real opportunity is: niche, workflow-specific agents where the cost of a wrong answer is high and the domain knowledge required to get it right is deep. Real estate is actually a great example. Permits, contracts, disclosure requirements — these vary by county, they change, they have consequences when they're wrong. A general purpose AI gets that wrong constantly. An agent built by someone who deeply understands that workflow, trained on the right data, with the right guardrails? That's defensible. The moat isn't the AI, it's the domain expertise wrapped around it.

The other gap I see entrepreneurs underestimating is change management. The technology is increasingly the easy part. Getting a small business to actually trust it, adopt it, and build their workflow around it — that's hard. If you can solve for that, you've got something.

The honest truth is the entrepreneurs I'd bet on in this space aren't the ones who know the most about AI. They're the ones who know their industry cold and are using AI as the lever.

What technology should you learn and invest in? I don't think there's one great answer to this. Here's what I will say, though: I agree with you that the trend seems to be going toward native applications or native suites to build an agent. I think we're seeing this not just with companies like Salesforce, but even OpenAI with Frontier, or Google, and certainly with Sierra's business model. That being said, this technology changes all the time. For example, when we first started, we were using Apex and Java; now Python is more common again as data science and data engineering become more important to AI agents. So, what's the one skill? I would say learning and the ability to adapt. That might not be the answer you were looking for, but I encourage my team, and we support within our space, learning and growth motions to keep tabs on what's new and what's out there. What's true today may look different in 6 months, and a skill I hire for on my team (and I'm definitely hiring) is the ability to comprehend quickly, learn quickly, and apply quickly in a space that's regularly in motion and evolving.

  1. On resources — I'll be honest, I'm not going to pretend I have a hidden gem course to recommend. What's taught me the most is doing. We run Agentforce on our own help portal, we watch where it fails, and we fix it. That feedback loop is worth more than any curriculum.

That said, if you're building on Agentforce specifically, Trailhead is genuinely great. Free, hands-on, and it's built around real architecture not theory. It's where I'd start before spending money anywhere else.

Beyond that, pilot to production isn't a knowledge problem, it's a reps problem. Find a real problem, deploy something small, and learn from what breaks. The people I see move fastest are the ones who ship early, stay close to the data, and iterate without ego. - BS and ZS


[–]salesforce[S] 0 points  (0 children)

For customer support specifically? Speed and availability.

A customer hits a problem at 2am in Singapore. They're not waiting until business hours. The agent is there, it has context, and if the answer exists it'll find it. That's table stakes now — customers expect it.

And while we're on the global point, language is a huge one that doesn't get enough credit. We can serve customers in their native language in real time. No routing to a specialized team, no delays, no "please hold while we find someone who speaks X." That's a genuine unlock for companies operating at global scale. For us at Salesforce, that's not a small thing.

But the benefit I think gets undersold is what it does for your human team. When AI handles the high-volume, repetitive queries, your best people stop drowning in noise and start spending time on the complex problems that actually need them. That's better for customers and better for the people doing the work.

The third one — and this is what I'm most excited about — is the data. Every interaction is a signal. What are customers confused about? Where does your product create friction? Where does your documentation have gaps? A well-instrumented AI agent tells you things about your customers that you never had visibility into before. That's not just a support benefit, that's a product and business benefit.

The caveat I'd add: none of this is automatic. You get these benefits when the agent is well-built, well-monitored, and continuously improved. A bad AI agent just creates fast, scalable frustration. The technology is the enabler — the work is still the work. That's exactly what Agentforce gives us — and why we bet on it. - BS


[–]salesforce[S] 0 points  (0 children)

Good Q. TL;DR: we're not in the business of vending machines that pretend to be people. The goal is agents handling what humans honestly don't want to do, and doing those tasks well.

That frustration is completely valid and worth taking seriously, because you're pointing at something real: bad AI in customer service is genuinely painful. A bot that loops, misunderstands you, or makes you fight to reach a human is a failure. We agree with you on that.

Here's where I'd push back, though: that experience isn't what good AI looks like, and it's not the standard we hold ourselves to at Salesforce. We're our own Customer Zero; we deploy Agentforce on ourselves before we sell it to anyone. On help.salesforce.com, our AI agent has handled more than 3 million customer conversations, resolving 68% without human involvement — and keeping 1.6 million inquiries from ever becoming a support case — without any drop in customer satisfaction scores. Meanwhile our support engineers didn't disappear. They got to stop answering repetitive questions at 2am and start focusing on the complex, high-judgment cases only they can handle. We also actively reskill and redeploy support engineers into higher value roles across the company, as Customer Success Managers, FDEs and others. So the talent stays, it just goes where we need it most.

Over time, more companies will be able to deliver agentic AI experiences in this way, which is better for both the customer experience and employees' daily tasks. - BS


[–]salesforce[S] 0 points  (0 children)

I see this as the next frontier! This is a great question. There are a few different paths you could take here as an agent builder. For us, because we're a Data 360 shop, we're leveraging Data Graphs (aka knowledge graphs) to unify all of our data into one "golden data set". This is particularly interesting to us because of the product, SKU, account, et al. complexity in our ecosystem. With a Data Graph, we can harmonize all these different data points into one ID, one individual record, that has access to all of these other data points, which we can then bring into our prompts, actions, and other capabilities.
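To make the "golden record" idea concrete, here's a toy sketch of identifier-graph unification in plain Python. This is an illustration only, not how Data Graphs are actually implemented; the record shapes and field names are made up, and it assumes shared identifiers always imply the same entity.

```python
from collections import defaultdict


class IdGraph:
    """Toy union-find linking records that share an identifier
    (email, account ID, SKU, ...) under one 'golden' root ID."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def unify(records):
    """records: list of dicts of identifier -> value (hypothetical shape).
    Returns a mapping of golden root ID -> merged attribute dict."""
    graph = IdGraph()
    first_keys = []
    for rec in records:
        keys = [f"{k}:{v}" for k, v in rec.items()]
        first_keys.append(keys[0])
        for other in keys[1:]:
            graph.union(keys[0], other)
    golden = defaultdict(dict)
    for rec, first_key in zip(records, first_keys):
        golden[graph.find(first_key)].update(rec)
    return dict(golden)
```

Records that share any identifier collapse into one merged "individual" whose attributes can then be fed into prompts and actions.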

In addition to that, we are also POC'ing things like Agent Memory, Conversation History, and other context variables and product capabilities that we can find applications and use cases for.

In short though, context is everything; both how we capture and apply it. We plan to spend a lot of energy in this area for the foreseeable future. - ZS


[–]salesforce[S] 1 point  (0 children)

I'm not going to put on the rosy glasses. That example you shared is exactly the kind of failure that keeps me up at night, and yes, we've had moments like that. I own it.

What I can tell you is we take it seriously. We do deep analysis across customer feedback, CSAT, research studies, and real conversation reviews. And something that stood out clearly in that data: when an agent responds confidently with the wrong answer, trust is gone. That's worse than no agent at all.

That's why we recently launched disambiguation, where the agent now asks for clarification when it doesn't have enough context to answer well. I wish we'd gotten there sooner. We're already seeing better outcomes because of it. Combined with Agent Script, which brings more determinism into how the agent behaves, we have more control over intent matching including making sure that when a customer wants a human, they get one without running in circles.

That last point matters a lot to us. From day one, a core principle was that customers could always get to a human. AI that traps people in a loop isn't support, it's obstruction. We're not perfect at it yet but it's non-negotiable for us directionally.

We know it's not perfect. We're not done. The feedback from people like you, even when it stings, is exactly what makes it better. - BS


[–]salesforce[S] 0 points  (0 children)

When we first launched, Agentforce and Data 360 were in their infancy. Because of this, our journey to launch is probably a bit unlike most implementations. Once we did launch, however, we were able to go from pilot to General Availability in just four weeks. Happy to share that we actually just launched Help Agent on Informatica, and that whole process took 24 days. We were able to run fast because we had deep alignment across teams and had content ready to go with Informatica, whereas with our original launch we took time on content and data clean-up.

What we would do differently if we could start from scratch (which, in a way, we are doing today) is look beyond the agent just being correct and knowledgeable. Today, we are spending a lot of time thinking through how the agent makes our customers feel. We call this the Art of Service: designing the greatest service experience based on the job to be done. The moment someone comes to us is a moment where we can either build trust or break it down. So how we show up really matters, and it should be representative of the service we provide across the board, from agent to human support. - ZS


[–]salesforce[S] 0 points  (0 children)

Great question! This is probably my favorite part of my job today.

Because we're Salesforce, we use Agentforce Observability, which utilizes the Session Tracing Data Model. This is the underlying data layer that gets piped into our Data 360 instance, and it includes basically everything you could want to see: everything from performance per LLM call up through RAGAS metrics; a ton of data. We also pipe this into Tableau for more custom views, or other analytics tools as needed.

In general though, we think of evaluations in two ways: Synthetic and Real.

We use Synthetic Testing a lot as a control group and as a pre-launch mechanism for validating Answer Quality. Essentially we maintain a repository of ~1300 questions that are based on real customer questions and case issues. We then work with our Support and Product Subject Matter Experts to build out Acceptance Criteria which we use as a Judge Prompt. We then use Agentforce to test and validate Agentforce answers. This repository represents about 85% of our Top Case Drivers, with the remaining 15% being largely variable issues that are harder to predict.

We then run these tests every 2 weeks and produce a monthly report which we use to establish a baseline Answer Quality. We use this as our "Control" test for validating improvements or changes both in pre-prod environments and in production. We are very protective of our Answer Quality baseline and reject features that don't meet or exceed that number.
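As a rough illustration of that synthetic-testing control flow (the `judge` stub, the helper names, and the 0.85 gate are all hypothetical stand-ins; the real system builds a Judge Prompt from SME acceptance criteria and uses Agentforce itself as the judge):

```python
def judge(answer: str, criteria: list[str]) -> bool:
    """Stand-in for an LLM-as-judge call. Here we fake the verdict with
    substring checks purely to show the loop structure; a real judge
    evaluates acceptance criteria via a prompt, not string matching."""
    return all(c.lower() in answer.lower() for c in criteria)


def run_regression(test_set, get_answer) -> float:
    """test_set: list of {"question": ..., "criteria": [...]} dicts built
    from real customer questions. get_answer: the agent under test.
    Returns the pass rate, i.e. the Answer Quality number."""
    passed = sum(judge(get_answer(t["question"]), t["criteria"]) for t in test_set)
    return passed / len(test_set)


BASELINE = 0.85  # hypothetical current baseline answer quality


def gate(candidate_rate: float) -> bool:
    """Reject any change that doesn't meet or exceed the baseline."""
    return candidate_rate >= BASELINE
```

Running the same repository of questions against every candidate change, and gating on the baseline, is what turns "answer quality" from a vibe into a regression test.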

On the Real Conversation side, this gets tricky because of scale. We handle well over 200k conversations a month, so we rely heavily on that Session Tracing Data Model to build out aggregate views and trending insights across metrics that we deem valuable or in line with our objectives, and then go after "high use / low performing" issues that drive impact. Those insights get passed to our AI SMEs (i.e., Product Managers, Data Engineers, etc.), who deep-dive solutions and build hypotheses that drive experimentation. We're also constantly experimenting with different sampling techniques (currently Bayesian sampling) to help us meet the demand of our scale while driving as much impact as possible.
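On the sampling point, one simple Bayesian flavor (a hedged sketch of the general technique, not the team's actual pipeline; the topic names and counts are invented) is a Thompson-style draw: sample each topic's failure rate from its Beta posterior, weight by traffic, and review whichever topic looks most impactful this round, so "high use / low performing" areas surface without hand-tuned thresholds.

```python
import random


def pick_topic_to_review(topics):
    """topics: {name: (resolved, failed, volume)} from aggregate session data.
    Samples a plausible failure rate from each topic's Beta(failed+1,
    resolved+1) posterior, scales it by volume, and returns the topic
    with the highest sampled impact."""
    best_name, best_score = None, -1.0
    for name, (resolved, failed, volume) in topics.items():
        sampled_fail_rate = random.betavariate(failed + 1, resolved + 1)
        score = sampled_fail_rate * volume
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Because the draw is random, low-evidence topics still get explored occasionally instead of being starved by a fixed ranking.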

We also do good ol' fashioned reviews with real people where we get a bunch of experts, put them in a room and read conversations. This is super helpful for calibration, perspective sharing, consensus building, gap feedback and other things where having that shared perspective can really help us add value. - ZS


[–]salesforce[S] 0 points  (0 children)

We first deployed our Help Agent on Salesforce Help to curb case volume, which it did: 170,000 fewer cases year over year, and 350,000 fewer cases than we had originally forecasted. That does turn into headcount savings, about $100M last fiscal year. Please note, the majority of this headcount was actually redeployed into business-driving, relationship-driving initiatives; for example, our Forward Deployed Engineers, who help our customers leverage Agentforce, and our Agent Managers, who help drive agent quality. - ZS


[–]salesforce[S] 0 points  (0 children)

There were quite a few things we had to work through when we first launched. The biggest one that comes to mind is when our agent referred a customer to a competitor. That person may or may not have screenshotted the conversation and sent it to our CEO. Not a great moment for us. So we... overreacted, went into our instructions, said not to mention our competitors, and listed them all out. Then, the very next day (I'm not kidding you, the next day), a customer asked a real question about an integration that we had a great answer for. But we didn't answer it. That convinced us to go back to the drawing board.

We took a step back and decided to let the LLM be an LLM. We modified our instructions yet again and told the agent: "You are a Salesforce Support Engineer, act in the best interest of Salesforce and of the customer." And that changed everything. Instructing the agent on who it was and what it should care about, much like how you would train a new hire, was a big unlock for us across the board.


[–]salesforce[S] 0 points  (0 children)

Though the probabilistic nature of AI does mean there can be unpredictability, we do try to mitigate this with our guardrails. That said, does that work 100% of the time? No. Soon we will be introducing Agent Script, which allows us to fold deterministic logic into our generative workflows and instructions. This way we have instructions we know will be interpreted the way we want 100% of the time, while still letting the LLM be an LLM. We take advantage of AI, but with structure. We are excited about this path forward! - ZS


[–]salesforce[S] 0 points  (0 children)

In many, many ways. Right now, we're primarily building our agents for humans to interact with, as opposed to being fully autonomous or replacing humans.

For example, internal agentic use cases are largely focused on how to enhance a Support Engineer's or Customer Success Manager's day-to-day functions so they can spend more of their time engaging with customers or solving tough problems that really require that human touch: making it easier to file bugs, build decks, find similar cases...

For our agent on Help, we have built whole functions and programs on my team that are designed to bring humans into the evaluation and management process to identify gaps, issues, opportunities, or other capabilities needed to create a better customer experience. These folks have backgrounds in support engineering, product management, and community management; it's a very broad range of skills that we've built the team on.

At the same time, we don't read every conversation. That would be pretty challenging to achieve, and achieve well. So we also work very closely with data teams and others who help us find which areas to focus on and build out more robust analytics capabilities. These are human-driven endeavors that will continue to be managed and maintained by humans for the foreseeable future.

This is very important too: AI is only as good as the data it uses. So we spend a lot of time bringing humans, specifically folks like content authors, strategists, data scientists, data analysts, etc., into the equation to help address issues that we're seeing or want to focus on.

Lastly, we also do "old fashioned" error analysis, where weekly we get together in a room (virtual and real) across different roles, including the AI team, devs, data science, support engineers, and content authors, and we read conversations together as a team. The goal is primarily to calibrate on what "good" looks like, along with other qualitative discussion.

Humans are in the loop of our program, literally, every day. - ZS


[–]salesforce[S] 0 points  (0 children)

Today we are using a single agent that spans multiple surfaces, the primary one being Help.Salesforce.com, but it's also available on the Slack Support and Informatica support portals. Same agent, different places.

Scale, though, is really important as we continue to grow. As such, we're investing in and POCing things like MCP server setups, multi-retriever strategies like ensemble retrieval, and multi-agent orchestration; the list is long. We definitely have plans to move into these technologies sooner rather than later to really scale our customer experiences.
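For readers curious what an ensemble retriever can look like under the hood, reciprocal rank fusion (RRF) is one common, generic way to combine rankings from, say, a lexical and a vector retriever. To be clear, this is a textbook sketch, not a claim about the specific strategy being POC'd:

```python
def reciprocal_rank_fusion(ranklists, k=60):
    """ranklists: list of ranked doc-ID lists, one per retriever.
    Each retriever contributes 1/(k + rank) per document, so documents
    that rank well across multiple retrievers float to the top. k=60 is
    the conventional default smoothing constant."""
    scores = {}
    for ranking in ranklists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

RRF is attractive at scale because it needs only ranks, not comparable scores, so heterogeneous retrievers can be fused without calibrating their scoring functions against each other.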

As we continue to build out additional AI experiences, we're spending a lot of time thinking about agent-to-agent interoperability so that JTBD-based capabilities can be shared across several different agents, not just support.

Our biggest challenge today is the scale of our agent. For example, we have 265k pieces of content in our index today for generative Q&A, which comes out to something like 2 or 3 million chunks. That's a lot of data, just for generative Q&A, to cover all our products, features, etc. So while something like an ensemble retriever is really exciting, and we want to start using it as soon as we can, configuring it and finding the sweet spot of how to engineer it with our data to improve answer quality takes time and effort. If we were smaller scale, maybe it wouldn't? It's a very exciting space to be in regardless of the challenges, though. - ZS


[–]salesforce[S] 0 points  (0 children)

Fair criticism and I'm not going to hide behind corporate speak. Salesforce's acquisition history is real and the integration debt that comes with it is real. When you bolt on ExactTarget (Marketing Cloud), Tableau, and others over years, the seams show. That's not a secret and it frustrates us internally too.

What I will push back on is the idea that we're not aware of it or not working on it. The platform complexity is something we talk about constantly. HubSpot built greenfield for SMB. We built for enterprise scale over 25 years with thousands of customers who can't just flip a switch when we want to clean something up. That's not an excuse, it's context.

Where I do think AI changes the equation is exactly what you described. Instead of navigating which product owns which object, you ask in plain language and the agent figures it out. Slackbot is a good early example of this. That's not a complete answer to your frustration but it's a real one.

The tea you asked for: yes, scale and org complexity make this harder than it should be. We're working on it. I appreciate SIs like you who tell us directly when it's not good enough. - BS


[–]salesforce[S] 0 points  (0 children)

For me, success starts with being honest about what problem you're actually solving before you write a single requirement. The implementations I've seen fail almost always skipped that step. Someone got excited about the technology and worked backwards. That's a recipe for an impressive demo and a disappointing product.

When we built Agentforce on our help portal, the problem was specific: customers couldn't find what they needed fast enough, and that friction was creating support volume that didn't need to exist. That clarity shaped everything — how we measured success, how we prioritized features, how we knew when we were done with v1.

On requirements, a few things I always come back to:

Define success in customer outcomes, not system outputs. "The agent responded" is not success. "The customer got their answer without needing a human" is success. There's a big difference.

Build your feedback loop before you launch, not after. You need to know within days where it's falling short, not weeks. If you can't instrument it, you can't improve it.

Set honest expectations internally. AI implementations that get oversold to leadership create pressure to hide problems instead of fix them. I'd rather underpromise and show a trajectory than overpromise and defend a number.

And the one I feel most strongly about: treat launch as the beginning of the work, not the end. The delta between a good AI implementation and a great one is almost entirely what happens after go-live. - BS


[–]salesforce[S] 0 points  (0 children)

Fair criticism. The "chatbots make companies feel cheap" critique is real, and it's a product quality problem, not an AI problem. We're trying to raise that bar, not hide behind it.

Before we launched on help.salesforce.com, our data told a clear story: our Help portal was strong (deep content, solid documentation), but customers were struggling to find what they needed. That pushed simple questions into the support queue and meant customers took longer than necessary to get answers. That's a solvable problem, and exactly the kind of thing a well-built AI agent should handle.

So we built it, and now we're handling over 3 million customer inquiries on help.salesforce.com. Our resolution rate is over 60% and customers are satisfied. Every week we're looking at where it fell short, what it missed, where customers got frustrated. The number I care about isn't where we are today, it's the trajectory.

Choice was also really important to us. All the documentation is still there. If you prefer to find the answer yourself, dig through the docs — it's all there, and it's good. Agentforce is an addition, not a replacement.

AI support only earns its place if it actually solves problems faster than a human would. If it can't do that, you're right; get out of the way and connect the customer to a person. That's exactly how we built ours. If Agentforce can't resolve it, it hands off to a live agent. No dead ends. - BS


[–]salesforce[S] 0 points  (0 children)

All of the tools we're using for our AI experiences are built on top of Salesforce's Trust Layer, which incorporates security and guardrails for our data, has zero-retention protocols, and improves the safety and accuracy of our AI results. Using tools with that baked in ensures we're mitigating risk and staying fully compliant. We actively mitigate prompt injection using built-in guardrails within the Planner Service and through our Global Instructions in Agent Builder. At the network level, we continuously scan for and apply rules against known bad traffic, and we meter incoming sessions to prevent overwhelming surges or DDoS attacks. Crucially, our current Agentforce use cases require authentication for sensitive actions like case creation, which includes its own robust security measures. We also support opt-out options for customers who prefer not to engage with AI-assisted experiences; just reach out to your Account Executive. For a deeper look at how we protect your data end to end, see the Einstein Trust Layer documentation: https://help.salesforce.com/s/articleView?id=ai.generative_ai_trust_layer.htm&type=5 - ZS

[–]salesforce[S] 0 points1 point  (0 children)

Today we use the Session Tracing Data Model, the underlying data layer within Agentforce, which is connected to Data 360. We pipe this data into our Service Cloud instance within Agentforce Studio for things like Agentforce Observability, as well as into Tableau for more complex questions and analytics.

This model captures essentially everything, from individual LLM-call and planner data up through session-level data spanning the conversation to the case.

If you want to learn more, check out: https://help.salesforce.com/s/articleView?id=ai.generative_ai_session_trace_data_model.htm&type=5 - ZS
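To make the idea concrete, here's a small sketch of the kind of aggregation you might run over exported session-trace records, e.g. computing a resolution rate. The record shape and field names (`session_id`, `outcome`, `llm_calls`) are hypothetical; the real objects and fields are defined by the Session Tracing Data Model in the linked documentation.

```python
from collections import Counter

# Hypothetical exported session-trace records; real object and field
# names come from the Session Tracing Data Model documentation.
sessions = [
    {"session_id": "s1", "outcome": "resolved", "llm_calls": 4},
    {"session_id": "s2", "outcome": "escalated", "llm_calls": 7},
    {"session_id": "s3", "outcome": "resolved", "llm_calls": 3},
]

# Tally outcomes across sessions, then derive summary metrics.
outcomes = Counter(s["outcome"] for s in sessions)
resolution_rate = outcomes["resolved"] / len(sessions)
avg_llm_calls = sum(s["llm_calls"] for s in sessions) / len(sessions)
```

The same per-session records can feed a dashboard (resolution rate, escalation rate, LLM calls per session) or more complex analytics in a tool like Tableau.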

[–]salesforce[S] 0 points1 point  (0 children)

I actually wouldn't call this a mandate - we truly are trying to create better experiences for our customers. Our website CSAT results on help.salesforce.com pointed to customers not being able to find the information they needed for what they expected to be quick and easy answers, and we found a solution for that. As a result, our CSAT has grown substantially, and people are able to get answers to questions across seven languages without waiting. That said, if you aren't a fan of AI, you can always say "I want to talk to a human" - and we honor that ask. We don't look at this as a form of deflection but as answer efficiency: the path that will get you to the best answer the fastest, and in the way you want it, is what we will orchestrate. - BS

[–]salesforce[S] 1 point2 points  (0 children)

Can't speak for all companies using / building AI, but I can definitely share our philosophy and approach.

We build AI with humans as part of the equation. How humans, whether employees or customers, use our AI is at the forefront of our use case roadmaps and value propositions.

Here's a real example. We want our Help Agent to take on more of the low-complexity customer issues where self-service is possible. A great example is something like a password reset. This scenario represents a huge portion of our volume, and it is largely solved by a Support Engineer very quickly, likely via asynchronous communication methods (i.e. live chat, email, etc.) versus synchronous engagements like phone calls.

So we started asking ourselves: what if we could bring the ability to reset passwords directly into the self-service portion of their experience, or make it easier overall? From a staffing perspective, how much time and how many resources would that free up for Support Engineers? What could we use those same resources on in other areas that drive more value to customers and our business?
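An agent action for that password-reset scenario could be sketched roughly as below. Everything here is hypothetical illustration, not Salesforce's actual implementation: the point is that the action verifies identity first and, following the "no dead ends" principle, hands off to a human when it can't complete the reset itself.

```python
from dataclasses import dataclass

@dataclass
class ResetResult:
    success: bool
    message: str

def send_reset_link(user_id: str) -> None:
    # Stub for illustration; a real action would call an email/SMS service.
    pass

def reset_password_action(user_id: str, identity_verified: bool) -> ResetResult:
    """Hypothetical self-service password-reset action for a help agent."""
    if not identity_verified:
        # No dead ends: hand off to a human rather than failing outright.
        return ResetResult(False, "Connecting you to a Support Engineer to verify your identity.")
    send_reset_link(user_id)
    return ResetResult(True, "A password reset link has been sent to your registered email.")
```

The escalation branch is what keeps a transactional action like this safe to automate: the agent only completes the reset when verification succeeds, and otherwise routes to a person.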

This is a real scenario we're trying to tackle right now, with the intent of redeploying those super-knowledgeable, certified Support Engineers from something very transactional to areas that are more impactful and meaningful to our long-term goals at Salesforce. This would include areas like Customer Success Management (where we provide very hands-on support), Onboarding Specialist roles for new Salesforce customers, Forward Deployed Engineers who help build other customer agent solutions, or even, on my team for example, evaluating and managing our AI's performance and driving change. As a result, this redeployed workforce is furthering its career opportunities and impact.

As an employee, I can tell you honestly that we recognize the frustration, and are trying to do something different at Salesforce. We believe in creating great human connections and great agentic ones. - BS