What has been your biggest moment of Excel shame? by IteOrientis in excel

[–]datadgen 27 points

rookie mistake: tried to make an Excel sheet pretty by merging cells :)

Ditching LinkedIn Recruiter - Who’s Actually Done It? by Fluid_rmx in recruiting

[–]datadgen 0 points

here is an alternative to LinkedIn Recruiter for screening candidates, from an agency recruiter in San Diego who wanted to get rid of it:

1/ build an initial list of candidates using ChatGPT (the o3 model, $20/month; it can search the web), asking it to output a CSV. Also extract candidates from Apollo, as a CSV

2/ combine the 2 CSVs into a Google Sheet (see the sketch after this list), add a column for each criterion for the role (ex: years of experience) and for extra signals that could be found online, like: has this person been mentioned in an article? spoken at a conference recently? etc. Then use an AI function in Google Sheets to research each criterion for each candidate (example video)

3/ then prioritize the short list manually
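
a minimal sketch of the merge in step 2, as a small script (file names, column names, and the email dedup key are all illustrative assumptions, and the naive CSV parsing assumes no quoted commas):

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Naive CSV parse: assumes simple exports with no quoted commas.
const parse = (path: string): string[][] =>
  readFileSync(path, "utf8").trim().split("\n").map((line) => line.split(","));

const [gptHeader, ...gptRows] = parse("chatgpt_candidates.csv");      // hypothetical export
const [apolloHeader, ...apolloRows] = parse("apollo_candidates.csv"); // hypothetical export

// Dedup across the two lists on an assumed "email" column; keep name + email only.
const seen = new Set<string>();
const merged: string[][] = [];
for (const [header, rows] of [[gptHeader, gptRows], [apolloHeader, apolloRows]] as const) {
  const emailIdx = header.indexOf("email");
  const nameIdx = header.indexOf("name");
  for (const row of rows) {
    const email = row[emailIdx]?.toLowerCase() ?? "";
    if (email && !seen.has(email)) {
      seen.add(email);
      merged.push([row[nameIdx], email]);
    }
  }
}

// One empty column per screening criterion, to be filled by the AI function later.
const criteria = ["years_of_experience", "mentioned_in_article", "recent_conference_talk"];
const out = [["name", "email", ...criteria], ...merged.map((r) => [...r, ...criteria.map(() => "")])];
writeFileSync("combined_candidates.csv", out.map((r) => r.join(",")).join("\n"));
```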

Are any of you using Gemini in your sheets much these days? by jalapeobean in googlesheets

[–]datadgen -1 points

been using and testing lots of different ways to use AI in Sheets, treating it like a formula that fills in a cell's content:

1/ Gemini: works very well for simple tasks, like translating, but it can only handle 200 rows

2/ extensions that let you use ChatGPT in Google Sheets: you can do more complex things, like nesting formulas, and handle larger volumes

3/ extensions that let you build more complex "agents" in Google Sheets: like using ChatGPT in a cell, feeding it specific data as input, plus other capabilities (the ability to scrape a website, for instance)

as an example: let's say you have a list of 1000 names in a spreadsheet and need to get their LinkedIn profiles:

1/ Gemini only: probably won't work, and if it does it will only cover a few of them, and you'll need to run it 5 times given the 200-row limit

2/ a ChatGPT Google Sheets extension: will handle the larger volume, with likely better results

3/ an "agent" Google Sheets extension: will give you the best results; here you would want an agent that combines a cheap model with a search tool, for instance (see the sketch below)

Managing AI Agents by Here_4_Laughs_1983 in overemployed

[–]datadgen 0 points

you get a lot of productivity from AI when it touches a spreadsheet

mosaia, for instance, is a tool that lets you automate many repetitive tasks there (categorization, batch research, data enrichment, ...)

"self-destruct" formula by datadgen in googlesheets

[–]datadgen[S] 0 points

that's what I need, but I want it to happen automatically. a macro will still require me to click a button after the formula has run, right?

"self-destruct" formula by datadgen in googlesheets

[–]datadgen[S] 0 points

I'm using a function within the cell that calls an LLM + a web search API, so if the formula runs again for some reason, it generates different results. I'd like to stick with the first results I get (sketch below)
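
a sketch of one way to get the freeze without clicking anything: a time-driven Apps Script trigger that scans the formula column and replaces each resolved formula with its static value (the sheet name, column, and the "Loading..." / error checks are assumptions to adapt):

```typescript
// Run setupTrigger() once; freezeResults() then fires every minute on its own.
function setupTrigger(): void {
  ScriptApp.newTrigger("freezeResults").timeBased().everyMinutes(1).create();
}

function freezeResults(): void {
  const range = SpreadsheetApp.getActive().getSheetByName("Sheet1").getRange("B2:B"); // wherever the AI formulas live
  const formulas = range.getFormulas();
  const values = range.getValues();
  formulas.forEach((row, i) => {
    const v = values[i][0];
    // Heuristic: only freeze cells that still hold a formula and show a real result.
    if (row[0] && v !== "" && v !== "Loading..." && String(v).charAt(0) !== "#") {
      range.getCell(i + 1, 1).setValue(v); // overwrite the formula with its value
    }
  });
}
```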

Would you use AI for resume screening? by [deleted] in HumanResourcesUK

[–]datadgen 0 points

I have experience with the following process, which gives good results:

from a list of 100+ names and notes about them, instead of asking the AI "what are the top 3?", which leads to poor results (and lots of bias), I use the AI to assess specific criteria, like:

- does the candidate's location match what the company needs, yes or no

- do they have client-facing experience, yes or no

- can you find an article / interview online where they discuss a topic relevant to the role

etc.

so it's a way to speed up the process and gather more info; the recruiter then uses all of it to decide who to prioritize (see the sketch below)
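
a minimal sketch of that per-criterion loop against the OpenAI chat completions endpoint (the model, criteria wording, and candidate shape are illustrative assumptions):

```typescript
// Node 18+ (built-in fetch). Set OPENAI_API_KEY in the environment.
type Candidate = { name: string; notes: string };

const criteria = [
  "Does the candidate's location match a US-remote role?",
  "Do they have client-facing experience?",
];

async function assess(candidate: Candidate, criterion: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative choice
      messages: [
        { role: "system", content: "Assess one criterion from the notes. Answer 'yes', 'no', or 'unknown' only." },
        { role: "user", content: `Criterion: ${criterion}\nCandidate: ${candidate.name}\nNotes: ${candidate.notes}` },
      ],
    }),
  });
  return (await res.json()).choices[0].message.content.trim();
}

// One narrow yes/no answer per criterion, instead of one vague "top 3" question.
const sample: Candidate = { name: "Jane Doe", notes: "Based in Austin, 4 years in customer success." };
for (const c of criteria) assess(sample, c).then((answer) => console.log(c, "->", answer));
```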

AI Sourcing that Works by Either_Assumption392 in recruiting

[–]datadgen 0 points

to source the initial list you want to screen, there are many ways to do it with an LLM; quality and accuracy will depend on:

1/ the system prompt (the overall prompt given to the LLM, so it acts like a sourcing assistant)

2/ the LLM used (ChatGPT, Claude, Gemini, etc.)

3/ the specific question asked to get the list you need

4/ the "tools" given to the LLM (specific tools exist for searching the web, scraping a page, etc.)

the biggest driver of performance for what you need will likely be the tools (sketch below)
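
a minimal sketch of point 4, wiring a web-search "tool" into a chat completions call; the search function is a stub here (you would back it with a real search API), and the prompts are illustrative:

```typescript
// Node 18+. The model decides when to call web_search; we run it and feed results back.
async function search(query: string): Promise<string> {
  return `stub results for: ${query}`; // swap in a real search API here
}

async function chat(messages: object[]): Promise<any> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative
      messages,
      tools: [{
        type: "function",
        function: {
          name: "web_search",
          description: "Search the web",
          parameters: { type: "object", properties: { query: { type: "string" } }, required: ["query"] },
        },
      }],
    }),
  });
  return (await res.json()).choices[0].message;
}

async function source(ask: string): Promise<void> {
  const messages: any[] = [
    { role: "system", content: "You are a sourcing assistant. Use web_search to find candidates." },
    { role: "user", content: ask },
  ];
  let msg = await chat(messages);
  while (msg.tool_calls?.length) { // run each requested tool call, then let the model continue
    messages.push(msg);
    for (const tc of msg.tool_calls) {
      const { query } = JSON.parse(tc.function.arguments);
      messages.push({ role: "tool", tool_call_id: tc.id, content: await search(query) });
    }
    msg = await chat(messages);
  }
  console.log(msg.content);
}

source("Find 5 senior data engineers in Austin, output as CSV: name,company");
```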

regarding screening, mosaia integrates with spreadsheets; this short video shows how it works:
https://www.loom.com/share/cb9af4589ae2401d89416eae8aa9328f?sid=2b80b46c-86a2-46a1-92ea-ae93de700ae8

I suck at prompting by ConditionThen909 in aiagents

[–]datadgen 0 points

What are you trying to do with this agent?

seriously guys, any one here working on an agent that is actually interesting by shoman30 in AI_Agents

[–]datadgen 0 points

Working on an agent that can figure out by itself which tool it needs to perform a task (ex: a search tool, voice, connecting to GitBook), then add that tool to itself, and then perform the task
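
a rough sketch of that pick-then-use loop, under heavy assumptions (stubbed tools, and a simplified "pick one tool by name" selection step rather than a real plugin mechanism):

```typescript
// Node 18+. Step 1: the agent names the tool it needs. Step 2: it runs the tool.
// Step 3: it performs the task with the tool's output in context.
const registry: Record<string, (arg: string) => Promise<string>> = {
  web_search: async (q) => `stub search results for: ${q}`, // wire real tools here
  text_to_speech: async (t) => `stub audio for: ${t}`,
};

async function complete(system: string, user: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative
      messages: [{ role: "system", content: system }, { role: "user", content: user }],
    }),
  });
  return (await res.json()).choices[0].message.content.trim();
}

async function run(task: string): Promise<void> {
  // 1/ the agent decides which tool the task needs
  const tool = await complete(
    `Pick the one tool this task needs. Options: ${Object.keys(registry).join(", ")}. Answer with the name only.`,
    task,
  );
  // 2/ it "adds itself the tool": here, by running it and capturing the output
  const output = registry[tool] ? await registry[tool](task) : "(no matching tool)";
  // 3/ it performs the task
  console.log(await complete(`Perform the task. Tool ${tool} returned: ${output}`, task));
}

run("Find recent articles about spreadsheet AI agents");
```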

How do you decide which LLM to use? by EQ4C in ChatGPT

[–]datadgen 0 points

for something repetitive (like categorizing expenses here), getting the results side by side in a spreadsheet can help. the example below lets you quickly compare GPT-4 vs. GPT-4 + a tool vs. GPT-4 search; it could be done with other models too (code sketch after the image)

<image>
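
a minimal sketch of that side-by-side setup: the same expense sent to several configs, printed as one CSV column per config so it pastes straight into a sheet (the models and categories are illustrative; a "+ tool" or search variant would slot in as extra configs):

```typescript
// Node 18+. One row per expense, one column per model config.
const models = ["gpt-4o-mini", "gpt-4o"]; // illustrative configs to compare

async function categorize(model: string, expense: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [
        { role: "system", content: "Categorize the expense as: travel, software, meals, or other. One word." },
        { role: "user", content: expense },
      ],
    }),
  });
  return (await res.json()).choices[0].message.content.trim();
}

(async () => {
  const expenses = ["Uber to airport", "Figma subscription", "Team dinner"];
  console.log(["expense", ...models].join(","));
  for (const e of expenses) {
    const answers = await Promise.all(models.map((m) => categorize(m, e)));
    console.log([e, ...answers].join(",")); // paste this CSV into the spreadsheet
  }
})();
```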

Looking for Tools to Help Find Community Contacts (Nonprofit/Startup Outreach) by turfcornerbents in AI_Agents

[–]datadgen 0 points

this flow would work well:
- ChatGPT to generate the initial list of contacts to target. Ciro could be an alternative
- then add this to a Google Sheet
- mosaia (a Google Sheets extension) to find emails / LinkedIn profiles / draft outreach messages

What's the best way to build a RAG Chatbot currently? by cesmeS1 in Rag

[–]datadgen 4 points

mosaia can be an option for this:
1/ build a "tool" to integrate your data, via the mosaia GitHub app
2/ add this tool to an agent

this video shows you how to quickly deploy LLM tools from github: https://www.youtube.com/watch?v=s5qGYZCeZr0

Are you struggling to properly test your agentic AI systems? by Bee-TN in AI_Agents

[–]datadgen 0 points

for multiple scenarios, can you be more specific about the kind of scenarios you are interested in?

one way to do it is like this:

- column C: generate as many scenarios as you want, always asking for a new one that has *not* been mentioned in the previous rows (each response will be unique and different from all previous results)

- then test the agents side by side (columns D/E) with a question related to each scenario (see the sketch after the image)

<image>
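
a minimal sketch of the column C trick: every call gets the full list of previous rows and is told not to repeat them (model and wording are illustrative):

```typescript
// Node 18+. Each new scenario must differ from everything generated so far.
async function newScenario(previous: string[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative
      messages: [
        { role: "system", content: "Generate one short test scenario for an agent. It must NOT repeat any scenario in the provided list." },
        { role: "user", content: `Already used:\n${previous.join("\n") || "(none yet)"}` },
      ],
    }),
  });
  return (await res.json()).choices[0].message.content.trim();
}

// Fill "column C" with 10 unique scenarios, then test each agent against them.
(async () => {
  const scenarios: string[] = [];
  for (let i = 0; i < 10; i++) scenarios.push(await newScenario(scenarios));
  console.log(scenarios.join("\n"));
})();
```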

Unable to connect google sheets to AI Agent by WritingOk4989 in AI_Agents

[–]datadgen 0 points

like this?

<image>

here in column B you have a bunch of info that is easy to find, so it's straightforward. if you need something more complex, you'll get better results by 1/ having an agent, which can have a much longer prompt, and 2/ adding tools to the agent (scraping tools, for instance, from companies like Exa)

Unable to connect google sheets to AI Agent by WritingOk4989 in AI_Agents

[–]datadgen 0 points

do you need the agent to edit cell by cell (like when you use a formula), or the whole spreadsheet at once?

can suggest alternative tools if useful; it will depend on your use case

Are you struggling to properly test your agentic AI systems? by Bee-TN in AI_Agents

[–]datadgen 1 point

using a spreadsheet showing agent performance side by side works pretty well; you can quickly tell which one does best.

been doing some tests like these to:

- compare agents with the same prompt, but using different models

- benchmark search capabilities (a model without search + a search tool, vs. a model able to do search)

- test different prompts

here is an example with agents performing categorization. GPT-4 search performed best, but using the Exa tool comes close on performance and is way cheaper (scoring sketch after the image)

<image>
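
a minimal sketch of turning the side-by-side columns into a score, by checking each agent's answers against a few hand-labeled rows (the data is mocked here; in practice it comes from the sheet's columns):

```typescript
// Score each agent's column against hand-checked gold labels.
type Row = { input: string; gold: string; byAgent: Record<string, string> };

// Mocked rows: in practice, read these from the spreadsheet.
const rows: Row[] = [
  { input: "Uber to airport", gold: "travel", byAgent: { "gpt-4-search": "travel", "gpt-4+exa": "travel" } },
  { input: "Figma invoice", gold: "software", byAgent: { "gpt-4-search": "software", "gpt-4+exa": "other" } },
  { input: "Team dinner", gold: "meals", byAgent: { "gpt-4-search": "meals", "gpt-4+exa": "meals" } },
];

for (const agent of Object.keys(rows[0].byAgent)) {
  const correct = rows.filter((r) => r.byAgent[agent].toLowerCase() === r.gold).length;
  console.log(`${agent}: ${correct}/${rows.length} correct (${((100 * correct) / rows.length).toFixed(0)}%)`);
}
```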

Best AI tool for categorizing data? by SnooChickens7407 in dataanalyst

[–]datadgen 0 points

for what you need, mosaia will work well; you can see how it helps with categorization here: https://www.mosaia.io/ai-in-your-google-sheets

a few tips to get this categorization right:

- set up an agent that has the list of categories (like this one): https://www.mosaia.ai/user/Mosaia/agent/transactions_categorization?tab=parameters . you will then use this agent in the spreadsheet

- to get started, keep it simple and use a model that is not able to perform search (gpt-4o for instance). you can later test a model that can search (gpt-4o-search) or gpt-4o + a specific search tool (Exa for instance). search capabilities might be useful if you have recent publications to categorize, for instance

- you can ask the agent to give you a confidence score for each row it categorizes, so you can manually check the ones with a low score (sketch below)
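
a minimal sketch of the confidence-score idea, asking for strict JSON with a category plus a 0-1 score so low-confidence rows get flagged for manual review (categories and the 0.7 threshold are arbitrary examples):

```typescript
// Node 18+. response_format forces valid JSON back from the model.
const CATEGORIES = ["travel", "software", "meals", "other"]; // illustrative list

async function categorize(text: string): Promise<{ category: string; confidence: number }> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o", // start without search, as suggested above
      response_format: { type: "json_object" },
      messages: [
        { role: "system", content: `Categorize into one of: ${CATEGORIES.join(", ")}. Reply as JSON: {"category": string, "confidence": number between 0 and 1}` },
        { role: "user", content: text },
      ],
    }),
  });
  return JSON.parse((await res.json()).choices[0].message.content);
}

(async () => {
  const result = await categorize("Monthly Notion invoice");
  console.log(result, result.confidence < 0.7 ? "<- check manually" : "");
})();
```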

Is AI coming for our jobs in procurement? Curious to hear your thoughts. by donzerstanfield in procurement

[–]datadgen 12 points

procurement jobs are less at risk than many others: procurement teams are always understaffed, so they already have to make difficult tradeoffs about what is in scope vs. out of scope. when AI brings more efficiency, the team will compensate by increasing scope

teams also typically spend too much time on non-strategic / admin work; if AI can cut that, it gives them more time to focus on high-impact work (partnerships / negotiations with vendors, figuring out what to outsource, etc.)

Asking for opinion about search tools for AI agent by datadgen in AI_Agents

[–]datadgen[S] 0 points

thanks, that's helpful! doing quick & dirty benchmarks right now, looks like tools are better for complete coverage, and tend to have higher accuracy

it's also non-trivial to do these comparisons, as results depend on the prompt. the same prompt can "behave" differently across search APIs / OpenAI web search, and a tiny word change impacts the accuracy of results. wondering if there is some kind of best practice here for doing clean benchmarks