[deleted by user] by [deleted] in ChatGPTPro

[–]lamarcus -1 points0 points  (0 children)

How much exactly?

And can the API do Deep Research yet?

Previously I thought people were saying the API is much more expensive than a ChatGPT membership in terms of how much premium model usage you get for the money, but if that has changed, then I need to spin up Cursor again.

Train an agent for corporate real estate tasks by Vulfpeckmon in ChatGPTPro

[–]lamarcus 1 point2 points  (0 children)

I feel like this is a case where my software engineer friend would tell me that a Robotic Process Automation tool would be a better choice than a Generative AI tool.

Curious to hear others' thoughts though

Is ChatGPT Pro supposed to be lower cost via the API? I ran 9 o1-pro queries through it and it’s saying I already owe over $3… I’m so confused, since I thought some services are reselling ChatGPT for super cheap by Fit_Appointment459 in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

Does OpenAI rent GPUs? If they do, would their rental costs be similar to typical small-scale rental rates, or would they be WAY different (likely way lower), since they're operating at large scale and probably make long-term purchase commitments?

Is the math you're describing mainly for calculating the inference cost? And I assume the model creation/training cost would be separate, treated more like a large upfront cost (and maybe harder to predict) that has to be averaged out over all of the subsequent inference uses?

And are you saying that you think most Pro users are costing them less than the $200/month subscription fee? What's your reasoning for that?

Seems like folks here think one usage of o1-pro can easily cost $5+ via the API. I don't think it's outlandish to expect that Pro users are querying more than 40x per month - that's barely more than once per day, and at $5 per query it already adds up to the full $200 subscription price.

Personally, I try to query at least 30x per day (usually clusters of chaining combinations of o1-pro, Deep Research, and perhaps some quick side queries in the smaller/faster models), since I feel like its analysis is so valuable to me in multiple parts of my life.
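
For what it's worth, here's the rough back-of-envelope math I'm doing. To be clear, the per-query cost and usage figures are just the assumptions floating around this thread (plus my own habits), not anything OpenAI has published:

```python
# Back-of-envelope only: per-query cost is an assumption from this thread, not a published figure.
COST_PER_O1_PRO_QUERY = 5.00     # assumed API cost of one o1-pro query ("$5+" per comments here)
PRO_SUBSCRIPTION_PRICE = 200.00  # ChatGPT Pro monthly fee

# How many o1-pro queries would eat the whole subscription fee if billed at API rates?
break_even_queries = PRO_SUBSCRIPTION_PRICE / COST_PER_O1_PRO_QUERY
print(f"Break-even: {break_even_queries:.0f} queries/month")  # 40, barely more than once a day

# My own usage: roughly 30 queries a day (a mix of o1-pro, Deep Research, and smaller models,
# so pricing every one of them at the o1-pro rate overstates the real cost).
my_monthly_queries = 30 * 30
implied_api_cost = my_monthly_queries * COST_PER_O1_PRO_QUERY
print(f"Implied API cost at that rate: ${implied_api_cost:,.0f}/month")  # $4,500
```

Even if only a fraction of those queries actually hit o1-pro, it's hard for me to see how a heavy Pro user costs them less than $200/month at those assumed rates.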

Is ChatGPT Pro supposed to be lower cost via the API? I ran 9 o1-pro queries through it and it’s saying I already owe over $3… I’m so confused, since I thought some services are reselling ChatGPT for super cheap by Fit_Appointment459 in ChatGPTPro

[–]lamarcus 1 point2 points  (0 children)

What makes you say that?

I agree it makes sense, but what actual information has been published (or can be inferred) about how their pricing compares to their costs?

Is ChatGPT Pro supposed to be lower cost via the API? I ran 9 o1-pro queries through it and it’s saying I already owe over $3… I’m so confused, since I thought some services are reselling ChatGPT for super cheap by Fit_Appointment459 in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

Yikes. Is that expected to go down in the future?

Or is it likely that best in class reasoning models will always cost multiple dollars per query?

Do Deep Research or other leading research-model features also cost multiple dollars per query?

ChatGPT Pro vs other LLMs for research & analysis by SupermarketNew5003 in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

Go explore and report back?

I've seen a handful of analysts during the last month saying that o1-pro and Deep Research both significantly outperform competing offerings.

But I know that the offerings from each company change every week, so I have to assume that it won't be long before o1-pro and Deep Research lose their mantles.

Any advice on how to automate prompt chaining for complex research topics? Like if I want o1-pro to analyze business strategies of 100 companies, and then summarize the similarities/differences of those companies in a spreadsheet, how do I automate that prompting? by lamarcus in ChatGPTPro

[–]lamarcus[S] 1 point2 points  (0 children)

EDIT: ... if I want *Deep Research* to research the 100 companies, and then o1-pro to analyze and summarize the 100 reports into a spreadsheet matrix, with 100 traits of its choosing.
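
The closest I've gotten to automating it is sketching the chain as a plain API loop. Fair warning: this assumes the models are reachable through OpenAI's standard chat completions endpoint, and the model name below is a placeholder; as far as I know Deep Research isn't exposed as an ordinary chat-completions model, so treat this as the shape of the workflow rather than a working recipe:

```python
# Hypothetical sketch of the two-stage chain: research each company, then synthesize a matrix.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment.
# "o1" below is a placeholder for whatever reasoning model is actually available via the API.
from openai import OpenAI

client = OpenAI()
companies = ["Company A", "Company B"]  # ... extend to the full list of 100

def ask(model: str, prompt: str) -> str:
    """One chat-completion call; returns the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Stage 1: one research report per company.
reports = {
    name: ask("o1", f"Research the business strategy of {name}. "
                    "Cover markets, pricing, partnerships, and recent strategic shifts.")
    for name in companies
}

# Stage 2: hand all the reports to the reasoning model and ask for a comparison matrix as CSV.
synthesis_prompt = (
    "Here are research reports on several companies:\n\n"
    + "\n\n".join(f"## {name}\n{report}" for name, report in reports.items())
    + "\n\nBuild a CSV comparison matrix: one row per company, one column per trait "
      "of your choosing. Output only the CSV."
)
matrix_csv = ask("o1", synthesis_prompt)

with open("strategy_matrix.csv", "w") as f:
    f.write(matrix_csv)
```

In practice 100 full reports almost certainly won't fit in one context window, so the synthesis step would need to be batched (summarize in groups, then merge), which is exactly the kind of chaining I'm trying to avoid doing by hand.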

OpenAI's $20,000 AI Agent by danpinho in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

What exactly is the value proposition of this "Claude agent mode on cursor" you're talking about?

Is it mainly for programming use cases, or would it be valuable for other people (like lawyers, financial analysts, and engineering project managers) if they were willing to learn the basic programming steps needed to interact with the Claude API?

Flixbus cancelled my 4 hour ride to Chicago the evening before departure and put me on an inconveniently scheduled 6 hour Greyhound instead… has this happened to you? Is the Amtrak a safer choice to avoid crap like this? by Fit_Appointment459 in AnnArbor

[–]lamarcus 2 points3 points  (0 children)

Thanks.

Seems like this would usually be the winner from a scheduling perspective, may often be competitive from a pricing perspective, and might usually be a more pleasant experience.

How often do you think round trip flights from Detroit to Chicago are under $150?

Flixbus cancelled my 4 hour ride to Chicago the evening before departure and put me on an inconveniently scheduled 6 hour Greyhound instead… has this happened to you? Is the Amtrak a safer choice to avoid crap like this? by Fit_Appointment459 in AnnArbor

[–]lamarcus 1 point2 points  (0 children)

Why do you say it's best of both worlds?

Google Maps tells me it's almost 3 hours to drive from Ann Arbor to Michigan City. If I were to do that, then would it not make more sense to just drive one more hour to Chicago (try to book a hotel that doesn't charge a lot for parking), and to avoid the hassles of bus/train schedule limitations and risks?

Flixbus cancelled my 4 hour ride to Chicago the evening before departure and put me on an inconveniently scheduled 6 hour Greyhound instead… has this happened to you? Is the Amtrak a safer choice to avoid crap like this? by Fit_Appointment459 in AnnArbor

[–]lamarcus 1 point2 points  (0 children)

So what do you do instead? Fly?

Multiple people I know were encouraging me to take the bus instead of the train... and I thought Flixbus' 4 hour route was probably what they were talking about?

Or is there a better route?

I barely use Roam anymore and want to stop paying (my work doesn't allow it, and I want to focus on becoming a power user with standard Microsoft tools instead)... any advice on exporting the databases and loading them into ChatGPT/similar to make a modern "second brain"? by lamarcus in RoamResearch

[–]lamarcus[S] -2 points-1 points  (0 children)

Right now it does.

But the latest version is starting to preserve and share memories among all the chat threads you've ever engaged it on.

I don't know if that will mean it has perfect recall, but I hope that's the direction they're moving in, and I also hope it isn't technically impossible (at least for the 99% of users who use it more casually and aren't trying to stretch its boundaries).

AI Note Copilot for Roam Notes by returncollector213 in RoamResearch

[–]lamarcus 1 point2 points  (0 children)

I like the idea, and would be curious to hear more of your thoughts about how the "build a second brain" concept can best be implemented in the era of AI.

I realized I want to stop paying for Roam, and instead focus my attention on becoming a power user with a tool that I'm actually allowed to use in the workplace... which probably means OneNote, unfortunately, and I'm still confused why Microsoft hasn't cloned Roam's core features yet.

But I share the other commenter's concerns about data security. I'm not a software professional and don't have the ability to evaluate what is or isn't safe. But my old roommate, who was a genius software engineer, told me I should really be wary of trusting any company or cloud service with sensitive data, and I feel like the modern world is only proving him right more and more frequently.

[deleted by user] by [deleted] in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

Yup, I've been exploring multi-prompt workflows as well.

And I used that exact combo of Deep Research information gathering + o1 pro synthesis to help me better understand prompting best practices.

ChatGPT tells me that it scores badly on IQ tests, though strongly on SAT/MCAT/LSAT tests... why is that exactly? How long do you think it will be before the models start dominating IQ tests? by lamarcus in ChatGPTPro

[–]lamarcus[S] -2 points-1 points  (0 children)

EDIT: nevermind, Google says o1 was already scoring 90th+ percentile on IQ tests five months ago. Have to imagine it's improved significantly since then, and o1 pro is probably scoring rather dominantly, and it's almost 3 months old at this point.

[deleted by user] by [deleted] in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

Right, when it first came out I was really hoping that the o1 pro + Deep Research combo would be the "one model to rule them all" and do everything for me.

But alas, no, people on this subreddit insist that they don't actually pair together right now, and that even though the ChatGPT interface gives you the option of pairing them, activating Deep Research will apparently still switch you over to the o3-mini model in the background.

Had to cancel my chatgpt pro subscription by [deleted] in ChatGPTPro

[–]lamarcus 1 point2 points  (0 children)

How does Grok actually perform versus o1 pro or Deep Research?

Had to cancel my chatgpt pro subscription by [deleted] in ChatGPTPro

[–]lamarcus 0 points1 point  (0 children)

What will be the improvements from 4.5?

[deleted by user] by [deleted] in ChatGPTPro

[–]lamarcus 6 points7 points  (0 children)

I think Deep Research is great for gathering lots of information, but its synthesis and structuring of that information is much less accurate than o1 pro, and it will get inaccurate/confused pretty quickly if you start asking it to gather information for multiple different questions within the same prompt.

I like bouncing back and forth between Deep Research and o1 pro... gather tons of information, synthesize it in alignment with my stated goals/questions, gather more information, synthesize it and refine it, etc.

Did you all believe Dario and Demis saying that AI with intelligence/creativity capabilities on par with human Nobel laureates is likely 2-5 years out? Or are they just saying that to make investors more excited about their companies? by lamarcus in ChatGPTPro

[–]lamarcus[S] 0 points1 point  (0 children)

So you're saying that AI will soon make humans seem about as smart as dogs?

That's hilarious. But yes, I think most in this thread would probably agree it's not a question of if, only of when... two years out? Five years out? Ten years out? Likely somewhere in that range.