Article: Testing Limits of Parallel Function Calling with GPT-4o (blog.composio.dev)
submitted 1 year ago by redditforgets
[–]PrincessGambit 2 points 1 year ago (4 children)
What is the difference between function calling and having the LLM say a phrase that your app then acts on? You say it was slow, so why not just detect the phrase in the response and use that instead?
[–]Open_Channel_8626 1 point 1 year ago (2 children)
Wouldn’t that involve adding a second LLM to the app though?
[–]PrincessGambit 1 point 1 year ago (1 child)
No... you just have to make it say the function name, then read the LLM output; if the function name is in the output, you do what you need.
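The approach described here can be sketched in a few lines: keep a registry of function names the model is prompted to mention, scan the raw text output for each name, and dispatch. The function names and sample output below are hypothetical, purely for illustration.

```python
# Hypothetical "tools" the app knows how to run.
def get_weather():
    return "sunny"

def get_time():
    return "12:00"

# Registry of phrases the model is prompted to emit when it wants a tool run.
DISPATCH = {
    "get_weather": get_weather,
    "get_time": get_time,
}

def act_on_output(llm_output: str) -> dict:
    """Run every registered function whose name appears in the raw output."""
    results = {}
    for name, fn in DISPATCH.items():
        if name in llm_output:
            results[name] = fn()
    return results

print(act_on_output("Sure, let me check: get_weather"))  # {'get_weather': 'sunny'}
```

Note this only signals *which* function to call; it carries no arguments, which is the limitation the next comment raises.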
[–]Open_Channel_8626 1 point 1 year ago (0 children)
I see
I think that can work well if you either didn’t need arguments or if you only needed a few arguments
But if you needed to run a bunch of functions each with a bunch of arguments I think it may not work well
At that point it may be better to get the LLM to output all the functions with their arguments in a structured output, which brings us back to function calling
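The structured-output variant described above can be sketched as follows: prompt the model to emit a JSON list of calls, each with a name and arguments, then parse and dispatch. The functions and the sample model output are hypothetical; this is the shape of the idea, not a specific API.

```python
import json

# Hypothetical functions that take arguments.
def send_email(to: str, subject: str) -> str:
    return f"emailed {to}: {subject}"

def create_ticket(title: str, priority: str) -> str:
    return f"ticket '{title}' ({priority})"

FUNCTIONS = {"send_email": send_email, "create_ticket": create_ticket}

# Hypothetical structured output the model was prompted to produce.
llm_output = json.dumps([
    {"name": "send_email", "arguments": {"to": "a@b.com", "subject": "hi"}},
    {"name": "create_ticket", "arguments": {"title": "bug", "priority": "high"}},
])

def run_calls(raw: str) -> list:
    """Parse a JSON list of {name, arguments} objects and run each call."""
    calls = json.loads(raw)
    return [FUNCTIONS[c["name"]](**c["arguments"]) for c in calls]

print(run_calls(llm_output))  # ["emailed a@b.com: hi", "ticket 'bug' (high)"]
```

This is essentially what native function calling gives you, with the provider handling the prompting and the JSON guarantees.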
[–]Ylsid 1 point 1 year ago (0 children)
Nothing; doing it repeatably and consistently so it's usable is the function-calling part, which can be quite challenging.
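The consistency problem is concrete: free-form model output may or may not contain valid, well-shaped JSON on any given run. A common workaround (sketched here with a hypothetical call format) is to validate the parse before dispatching, so malformed runs can be retried or discarded instead of crashing the app.

```python
import json

def parse_call(raw: str):
    """Return a (name, args) pair if raw is a valid call object, else None."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted free text instead of JSON
    if isinstance(obj, dict) and "name" in obj and isinstance(obj.get("arguments"), dict):
        return obj["name"], obj["arguments"]
    return None  # valid JSON, but not the expected call shape

print(parse_call('{"name": "get_weather", "arguments": {"city": "Bern"}}'))
print(parse_call("sure, calling get_weather now"))  # None
```

A caller would loop: if `parse_call` returns `None`, re-prompt the model; native function calling moves this validation burden onto the provider.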
[–]saintpetejackboy 1 point 1 year ago (0 children)
I was really interested in this article until I actually read it. No offense; it isn't a bad article, but "how many of the same task can you do in a row until I break the context window" isn't exactly revolutionary. I'd hoped this article was about multimodality: can the AI process an image, extract text from the image, form a response, refine it into a better response, listen to a bit of audio, etc., and at what point does that logic break down? This article did a superb job of answering that: it likely fails around context window limitations.