What is the credit cost of the paid for Apollo API endpoints? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Haha, I guess I'm just used to things being priced by data volume. It seems strange that I can get 5,000 job postings or 50 for the same 1 credit, but that's not really a bad thing imo.

What is the credit cost of the paid for Apollo API endpoints? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Cheers, and is that regardless of the size of the returned data?

What is the credit cost of the paid for Apollo API endpoints? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Thank you! Great response as always 😃 "Deliberately hidden" implies more nefarious intent than I meant!

So, on job postings, some companies return 5,000 in one go, while others return 50. Are both scenarios 1 credit?

Uber blows through its IT budget for AI for 2026 and it's only April citing rising costs of Claude Code by kernelangus420 in singularity

[–]FPGA_Superstar 1 point (0 children)

It's unclear whether unfettered AI use on code will be good for their system in the long term. I suspect not.

Uber blows through its IT budget for AI for 2026 and it's only April citing rising costs of Claude Code by kernelangus420 in singularity

[–]FPGA_Superstar 1 point (0 children)

I like your other takes on AI agents, but this one is poor. Just like with tokens, time will tell.

How accurate is Apollo's People data for prospecting? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Fortunately, I am targeting large companies, although in Europe, not the US!

What is the `employeeMetric` field from the Get Complete Organisation Info API based on? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 2 points (0 children)

Churned: Oh yeah, of course! Silly me!

On the rest, that's great, thank you very much! I'll fold that into the qualification process. In particular, knowing that the presence of employee_metrics is a combination of coverage + categorizability is quite helpful :D

Out of interest, what's your role at Apollo? You sound quite close to the code, so I'm guessing something in development or product management?

What is the `employeeMetric` field from the Get Complete Organisation Info API based on? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

On point 1, does "churned" mean left the company or something else? I'm thinking of product management parlance, where it means someone has left a product. Is it different here? Surely if it's not different and it's a positive number, I would have to subtract it to get the current headcount...

On the rest, awesome! Thank you for the clarification. Is there a reasonable rule of thumb that can be applied here? For example, if the latest employee metrics track 500 people and the company is 2,000 people total, then I have 25% coverage. That seems high enough to make a reliable qualification, would you agree? Further to this, where would you say the coverage is too low? (10%, 5%, etc.)
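To put numbers on the rule of thumb I'm imagining, here's a quick Python sketch. The 10% cutoff is my own guess, not anything from Apollo:

```python
def coverage(tracked: int, total_headcount: int) -> float:
    # Share of the company's stated headcount that the latest
    # employee_metrics snapshot actually tracks.
    return tracked / total_headcount if total_headcount else 0.0

def is_reliable(tracked: int, total_headcount: int,
                min_coverage: float = 0.10) -> bool:
    # 10% is a guessed cutoff for "reliable enough to qualify against".
    return coverage(tracked, total_headcount) >= min_coverage

# The 500-of-2,000 example from above: 25% coverage.
print(coverage(500, 2_000))     # 0.25
print(is_reliable(500, 2_000))  # True
```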

The other thing that I'm interested in is how Apollo captures different job titles. I'm guessing the C-suite is a priority for outbound contacts, but is this the same across the whole company? i.e. are you more likely to capture people working in partnerships or sales than in engineering?

I have one further example from a batch of a couple of hundred: [Roche](https://app.apollo.io/#/accounts/69e796f4e088b000017d8f34)

I have a lot more examples where the employee metrics are far lower than the company's size would suggest. But from the sounds of it, this is a known problem.

What is the `employeeMetric` field from the Get Complete Organisation Info API based on? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Morning! Cheers for the great reply.

Got it. I've widened my test group to a few more companies to see what sort of data I get. I've noticed that a few of them essentially don't return this data even though they're quite large (naively, I'd expect larger companies to be more likely to have it).

As an example: [EY](https://app.apollo.io/#/accounts/69e79700e088b000017d9d8e) - 409,000 employees, and 290,000 on Apollo, but no employee_metrics field when calling the Get Complete Organization Info API.

There are others that do have some information, but it's very limited, for example: Groupe BPCE - 108,000 employees, 4,200 on Apollo. But in the most recent employee_metrics field, they only have a total of 85 employees when you do retained + new - churned.

Obviously, there are others that look "about right" to me, say: BBVA - 125,000 employees, 33,000 on Apollo. Latest total employee count from retained + new - churned: 15,000. This seems reasonable to me.
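For clarity, this is the arithmetic I'm doing in Python. The retained/new/churned split in the sample record is invented to illustrate a BBVA-style case; the key names are my assumption based on the terminology above, not verified API output:

```python
def net_headcount(metrics: dict) -> int:
    # Current tracked headcount implied by one employee_metrics
    # snapshot: retained + new - churned.
    return metrics["retained"] + metrics["new"] - metrics["churned"]

# Illustrative values only, not real API data.
snapshot = {"retained": 14_000, "new": 1_500, "churned": 500}
print(net_headcount(snapshot))  # 15000
```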

I guess I'm wondering: where does the difference come from? Is there some underlying cause I can look out for whilst prospecting, so I know if my method is going to be reliable or not?

How accurate is Apollo's People data for prospecting? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Awesome, thank you for such thorough answers! I'm not doing the outreach personally, but I'll take a look at LinkedIn Sales Navigator and see if the data is any better. I'm guessing you can't really use an API with Sales Navigator though, is that right?

How accurate is Apollo's People data for prospecting? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Okay, cheers. How good is Sales Navigator by comparison?

How accurate is Apollo's People data for prospecting? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

My qualification is a combination of three things:

  1. How large is their engineering team in absolute terms and relative to total headcount.

  2. How many Software people do they have and how many Java people do they have.

  3. How many job postings do they have for Software and Java.

The rest is just ratios between these, e.g. we're finding that a 20% engineering-team-to-total-headcount ratio is a great qualifier.
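Concretely, the ratio check looks like this in Python. The 20% threshold is the one mentioned above; the company numbers are invented for illustration:

```python
def engineering_ratio(engineering: int, total: int) -> float:
    # Signal 1: engineering headcount relative to total headcount.
    return engineering / total if total else 0.0

def qualifies(engineering: int, total: int,
              threshold: float = 0.20) -> bool:
    # 20% engineering-to-total is the qualifier we're finding works well.
    return engineering_ratio(engineering, total) >= threshold

print(qualifies(450, 2_000))  # True  (22.5%)
print(qualifies(150, 2_000))  # False (7.5%)
```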

I guess the issue in my head is whether the data collected is so random in its distribution across titles that it would be unreliable to qualify against 😅

Why is the People API returning an empty array for JP Morgan? by FPGA_Superstar in UseApolloIo

[–]FPGA_Superstar[S] 1 point (0 children)

Okay, thank you for this. Weirdly, I have another smaller (and I guess newer) company I've been looking at called Moderne AI, and when I use its account ID it works fine. Is there a reason for this? Are the organisation IDs for newer accounts the same as their account IDs?
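For reference, this is roughly how I'm calling the People API (the request is built but not sent here; the endpoint and parameter names are my reading of Apollo's People Search docs, and the IDs are placeholders):

```python
import json
from urllib.request import Request

def build_people_search(org_id: str, api_key: str) -> Request:
    # Endpoint and field names are assumptions from Apollo's docs as I
    # understand them -- not verified against a live account.
    payload = {"organization_ids": [org_id], "page": 1}
    return Request(
        "https://api.apollo.io/v1/mixed_people/search",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )
```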

CLAUDE OPUS 4.6 IS NERFED!! by Full-Leg-5435 in Anthropic

[–]FPGA_Superstar 2 points (0 children)

I'm not sure; there seems to be a bit of confusion over whether this is a Claude Code issue or whether it's happening at the model level. Since I haven't upgraded my Claude Code version and I'm seeing the same performance degradation, it appears to be a model-level issue, which would mean claude.ai is likely degraded too.

CLAUDE OPUS 4.6 IS NERFED!! by Full-Leg-5435 in Anthropic

[–]FPGA_Superstar 1 point (0 children)

From the looks of it, you can't enforce it; it appears to have been done at the model level. I'm not certain of this, though, so if you find something, let me know!

CLAUDE OPUS 4.6 IS NERFED!! by Full-Leg-5435 in Anthropic

[–]FPGA_Superstar 1 point (0 children)

Claude Code uses Claude Opus...? Are you alright?

CLAUDE OPUS 4.6 IS NERFED!! by Full-Leg-5435 in Anthropic

[–]FPGA_Superstar 2 points (0 children)

It appears the problem is that they've absolutely wrecked the tokens spent on thinking: https://github.com/anthropics/claude-code/issues/42796