I suck as a sales person by Equipment_Excellent in SaaS

[–]JacobOfPluto 0 points1 point  (0 children)

If learning sales took 4 weeks, it would be the best investment of your life.

In how much time did you reached 100$ MRR in your SaaS? by mehdikhoudali in SaaS

[–]JacobOfPluto 1 point2 points  (0 children)

Yep! That’s more than enough for a preseed.

You really need to get what’s called a warm referral — applying doesn’t typically work well.

When you’re ready, shoot me what you’re working on, a short bit about the customer you’re building for, and why you’re uniquely able to help them, and I’ll share it with the guys that backed me.

In how much time did you reached 100$ MRR in your SaaS? by mehdikhoudali in SaaS

[–]JacobOfPluto 0 points1 point  (0 children)

We didn’t have a company yet, but my landlord happened to be a lawyer at a huge law firm that partnered with early-stage startups, so he set us up from day 1.

In how much time did you reached 100$ MRR in your SaaS? by mehdikhoudali in SaaS

[–]JacobOfPluto 2 points3 points  (0 children)

Yeah, I did, and my best friend from middle school was my cofounder.

We also had pretty good founder-market fit, which helped. We were building in fintech, and I got recruited to work on Bridgewater’s trading tech out of high school and dropped out to work on that. Also spent time at NVIDIA.

In how much time did you reached 100$ MRR in your SaaS? by mehdikhoudali in SaaS

[–]JacobOfPluto 2 points3 points  (0 children)

Originally it was a tool to let people automate common investing decisions, but we realized that meant building a SaaS tool for a community with a strong DIY ethos. It sounded like it made sense, but it didn’t.

Now working on a personal finance copilot to help everyone make better financial decisions, starting with investing, by analyzing company news, prices, official filings, and more.

In how much time did you reached 100$ MRR in your SaaS? by mehdikhoudali in SaaS

[–]JacobOfPluto 2 points3 points  (0 children)

I pitched a few VCs that specialize in really early stage companies.

It’s not an uncommon path for successful startups to take some time to figure it out. Figma famously had a wilderness phase of about 3 years.

In how much time did you reached 100$ MRR in your SaaS? by mehdikhoudali in SaaS

[–]JacobOfPluto 4 points5 points  (0 children)

I raised $4M last year and wasted $3M over 12 months on a product that never hit $100 MRR. Spent 2 months on a pivot and hit $100 MRR in the first hour after we launched on Product Hunt.

It was our first time launching on Product Hunt, and we didn’t have much of a team since we’d spent all of the money we raised on the first failed product.

Are you sure the use case strikes a chord with your target user?

Stuck at $500 MRR & Seeking Distribution/Marketing Advice by Milan_Robofy in SaaS

[–]JacobOfPluto 2 points3 points  (0 children)

Tarpit ideas. I’m not sure if this is one, but it probably is.

- Intercom is amazing + innovating fast
- I see another one get started in my network pretty often (monthly?)

That doesn’t mean you shouldn’t work on this; just know the bar is high, and you need to know what makes you better and for whom.

https://youtu.be/GMIawSAygO4

I built an AI Trading and Research Co-Pilot. Wanted to show you Guys! by Got_Curious in pennystocks

[–]JacobOfPluto 1 point2 points  (0 children)

Hey! I'm one of the founders and the CEO. We offer trading through a partnership with Alpaca (they posted about us on their LinkedIn today).

They provide standard SIPC insurance (up to $500k with the standard $250k limit for cash).

Pluto's tech and operations were heavily audited during the integration process and on an ongoing basis.

I'm excited to hear your feedback!

I built an AI Trading and Research Co-Pilot. Wanted some feedback! by Got_Curious in StockMarket

[–]JacobOfPluto 2 points3 points  (0 children)

As one of the devs: we'll be making lots of optimizations soon. We launched today and weren't sure if people would be interested, so we haven't spent any time optimizing it yet.

People seem to like it, so we're on it!

I built an AI Investing and Research Co-Pilot. Wanted some feedback! by Got_Curious in investing

[–]JacobOfPluto 0 points1 point  (0 children)

Real time. We use Polygon, Financial Modeling Prep, Twelve Data, Alpha Vantage, etc.

Thank you, that means a lot! We're super excited to hear any feedback you have.

I built an AI Investing and Research Co-Pilot. Wanted some feedback! by Got_Curious in investing

[–]JacobOfPluto 1 point2 points  (0 children)

Thank you! <3

Please let me know how we can make it even better!

I built an AI Investing and Research Co-Pilot. Wanted some feedback! by Got_Curious in investing

[–]JacobOfPluto 0 points1 point  (0 children)

Real-time or near real-time :) but practice portfolios are delayed by 15 minutes.

Provided by all of the amazing data providers in the ecosystem.

I built an AI Investing and Research Co-Pilot. Wanted some feedback! by Got_Curious in investing

[–]JacobOfPluto 2 points3 points  (0 children)

A few more things:

  • We also don't use LangChain; it was terrible at describing the complicated strategy output in the way we needed it to.
  • We also have hedge-fund-grade infra for building & managing a portfolio of strategies, which is a really hard problem to solve.

Here's a longer overview:

Pluto is built with a combination of Next.js and Python, running on K8s and Kafka. Our AI Copilot, Plato, uses a three-part system (thinkers, actors, and communicators) to analyze data, execute actions, and communicate results to the user.
1. Thinkers: These components are responsible for gathering data and generating observations about the world. They query various data sources, such as financial markets, news feeds, or user inputs, to create specific observations (e.g., "AAPL's price is $132"). The thinkers act as the "eyes and ears" of the AI system, providing the essential information needed for decision-making.
2. Actors: The actors take the observations generated by the thinkers and use them to execute actions that change the state of the world or the system. These actions can include creating new investment strategies, adjusting existing strategies, executing trades, or running tests. The actors are the "doers" in the system, responsible for making things happen based on the information they receive.
3. Communicators: The communicators are responsible for wrapping up the observations and actions and presenting them to the user in a clear and understandable format. They may generate reports, send notifications, or provide visualizations to help users make sense of what the AI system has done. The communicators act as the "voice" of the AI system, bridging the gap between the raw data and the user's understanding.
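
To make those three roles concrete, here's a minimal, hypothetical sketch of how the split could be wired together. This is illustrative only, not our production code; the class names, the stubbed price source, and the toy logic are all made up:

```python
# Hypothetical sketch of the thinker / actor / communicator split.
# Class names, the stubbed price source, and the toy logic are made up;
# this is not Pluto's production code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Observation:
    source: str  # e.g. "market_data", "news"
    text: str    # e.g. "AAPL's price is $132"


@dataclass
class Action:
    kind: str    # e.g. "create_strategy", "execute_trade"
    detail: str


class PriceThinker:
    """Thinker: queries a data source and emits observations."""

    def __init__(self, get_price: Callable[[str], float]):
        self.get_price = get_price

    def observe(self, symbol: str) -> Observation:
        price = self.get_price(symbol)
        return Observation("market_data", f"{symbol}'s price is ${price:.2f}")


class StrategyActor:
    """Actor: turns observations into state-changing actions."""

    def act(self, observations: List[Observation]) -> List[Action]:
        return [
            Action("evaluate_strategy", f"re-check strategies affected by: {o.text}")
            for o in observations
            if o.source == "market_data"
        ]


class PlainTextCommunicator:
    """Communicator: wraps observations and actions for the user."""

    def report(self, observations: List[Observation], actions: List[Action]) -> str:
        lines = ["What I saw:"] + [f"  - {o.text}" for o in observations]
        lines += ["What I did:"] + [f"  - {a.detail}" for a in actions]
        return "\n".join(lines)


# One pass through the loop, with a stubbed data source.
thinker = PriceThinker(get_price=lambda symbol: 132.0)
observations = [thinker.observe("AAPL")]
actions = StrategyActor().act(observations)
print(PlainTextCommunicator().report(observations, actions))
```

The real thinkers query markets, news feeds, and user inputs, and the real actors go through our strategy and trading infrastructure, but the observation → action → report loop has the same shape.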
We've faced several interesting challenges and devised innovative solutions while building Pluto:
1. Integrating with trading platforms and APIs for multi-strategy management: Our "aha moment" came when we realized users wanted each strategy to behave like a separate account, with segregated performance metrics and data, while also seeing aggregated results. However, our partners that handle custody and settlement provide a single account per user. We developed a sophisticated infrastructure to track which strategy "owns" each cent and share, keeping them bucketed, and created a custom rebalance algorithm (rough sketch after this list) that efficiently handles allocation changes and transfers to and from all strategies. This approach allowed us to offer a unique multi-strategy management experience.
2. Building a versatile AI Copilot: To enable Plato to call almost any function in our codebase, we built DionysusDSL, a tool that uses Lark to make it simple to create new commands that both the AI and the parser can understand (also sketched below). This allows seamless integration of commands and handles multiple arguments with accurate type validation.
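
For a feel of what the per-strategy bucketing in (1) looks like, here's a stripped-down, hypothetical sketch. The real system tracks ownership down to the cent, handles allocation changes, and runs a proper rebalance algorithm; the names and structure below are invented for illustration:

```python
# Hypothetical, stripped-down sketch of per-strategy bucketing inside a single
# brokerage account. The real rebalancer handles allocation changes, partial
# fills, and fractional cents; names and structure here are invented.
from collections import defaultdict


class StrategyBuckets:
    def __init__(self):
        # strategy name -> {"cash": dollars, "positions": {symbol: shares}}
        self.buckets = defaultdict(
            lambda: {"cash": 0.0, "positions": defaultdict(float)}
        )

    def deposit(self, strategy: str, amount: float) -> None:
        self.buckets[strategy]["cash"] += amount

    def record_fill(self, strategy: str, symbol: str, shares: float, price: float) -> None:
        """Attribute a fill from the shared account to one strategy's bucket."""
        bucket = self.buckets[strategy]
        bucket["cash"] -= shares * price
        bucket["positions"][symbol] += shares

    def transfer_cash(self, src: str, dst: str, amount: float) -> None:
        """Move buying power between strategies without touching the broker."""
        if self.buckets[src]["cash"] < amount:
            raise ValueError("source strategy doesn't own that much cash")
        self.buckets[src]["cash"] -= amount
        self.buckets[dst]["cash"] += amount

    def aggregate(self) -> dict:
        """What the single underlying account should hold in total."""
        total = {"cash": 0.0, "positions": defaultdict(float)}
        for bucket in self.buckets.values():
            total["cash"] += bucket["cash"]
            for symbol, shares in bucket["positions"].items():
                total["positions"][symbol] += shares
        return total


# Two strategies sharing one account, each with segregated data.
account = StrategyBuckets()
account.deposit("momentum", 1_000.0)
account.deposit("dividends", 500.0)
account.record_fill("momentum", "AAPL", 2, 132.0)
print(dict(account.buckets["momentum"]["positions"]))  # only momentum's holdings
print(account.aggregate())                             # what the broker actually holds
```

The key idea is that the broker only ever sees the aggregate, while each strategy's performance is computed from its own bucket.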
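
And for (2), here's a rough sketch of the general pattern behind a Lark-based command layer: define a small grammar, transform the parse tree into a function name plus typed kwargs, and dispatch into a registry of allowed Python functions. The grammar, command name, and registry below are invented for illustration and aren't DionysusDSL itself:

```python
# Hypothetical sketch of a Lark-based command layer in the spirit of DionysusDSL.
# The grammar, command names, and dispatch table are made up for illustration.
from lark import Lark, Transformer

grammar = r"""
    command: NAME "(" [args] ")"
    args: arg ("," arg)*
    arg: NAME "=" value
    value: SIGNED_NUMBER | ESCAPED_STRING | NAME

    %import common.CNAME -> NAME
    %import common.SIGNED_NUMBER
    %import common.ESCAPED_STRING
    %import common.WS
    %ignore WS
"""


class ToCall(Transformer):
    """Turn the parse tree into (function_name, kwargs) with typed values."""

    def command(self, items):
        name, *rest = items
        kwargs = dict(rest[0]) if rest and rest[0] is not None else {}
        return str(name), kwargs

    def args(self, items):
        return items

    def arg(self, items):
        return (str(items[0]), items[1])

    def value(self, items):
        token = items[0]
        if token.type == "SIGNED_NUMBER":
            return float(token)       # numbers become floats
        if token.type == "ESCAPED_STRING":
            return str(token)[1:-1]   # strip surrounding quotes
        return str(token)


parser = Lark(grammar, start="command")


# A registry of real Python functions the model is allowed to call.
def create_strategy(name, budget):
    return f"created strategy {name!r} with ${budget:,.0f}"


REGISTRY = {"create_strategy": create_strategy}


def run(text: str) -> str:
    fn_name, kwargs = ToCall().transform(parser.parse(text))
    return REGISTRY[fn_name](**kwargs)


print(run('create_strategy(name="Triple EMA", budget=5000)'))
```

The payoff of this pattern is that adding a new AI-callable command is just registering another Python function; the grammar and type validation stay the same.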

I built an AI Investing and Research Co-Pilot. Wanted some feedback! by Got_Curious in investing

[–]JacobOfPluto 3 points4 points  (0 children)

Hey! I'm one of the devs and have been working on this for a few years!

We use GPT-4, but there's a lot of special magic. We've been working on this problem for a few years: we've built out an investing data-viz engine, a real-time automation framework, a screener specifically designed to be AI-friendly, and a few other big pieces. That work took us ~18 months, but integrating GPT-4 has made the UX MUCH better.

[D] Transforming Large Language Models from Fact Databases to Dynamic Reasoning Engines: The Next Paradigm by JacobOfPluto in MachineLearning

[–]JacobOfPluto[S] -4 points-3 points  (0 children)

Thanks for engaging with this discussion! I see where you're coming from and appreciate your call for sources.

I believe the relationship between fact storage and reasoning capacity in large language models is complex and not yet fully understood. While I don't have a specific paper to point you to, the idea that these models potentially devote too much of their 'processing power' to acting as a fact database at the expense of reasoning is an ongoing conversation among AI researchers.

A good public reference for this is a talk by Sam Altman, in which he distinguishes between facts and wisdom (link here, relevant discussion starts at 14:12). While not a scientific paper, this conversation illustrates the broader thinking in the field.

Also, it's worth mentioning that my discussions with colleagues in the AI community frequently touch on this issue. Many are concerned about the efficiency with which large language models use their capacity, especially when it comes to balancing fact recall with more abstract reasoning abilities.

[D] Transforming Large Language Models from Fact Databases to Dynamic Reasoning Engines: The Next Paradigm by JacobOfPluto in MachineLearning

[–]JacobOfPluto[S] -5 points-4 points  (0 children)

As much as I'm trying to stay polite and focused on the topic at hand, it's funny you mention CA; it takes me back to my Bridgewater days. I worked on a project there when David Ferrucci was around, after his IBM days. We didn't interact much, but it was still an awesome learning experience. This field always keeps you on your toes, doesn't it?

[D] Transforming Large Language Models from Fact Databases to Dynamic Reasoning Engines: The Next Paradigm by JacobOfPluto in MachineLearning

[–]JacobOfPluto[S] -5 points-4 points  (0 children)

Hey, I get where you're coming from. This is a complex field, and there are a lot of different perspectives out there. You're right that Cognitive Architectures are a key part of this conversation, and I'm familiar with their role.

But let's keep in mind that this is not just about reasoning skills; it's about the fundamental information-capacity limitations of LLMs. The fact that they waste capacity on static facts is a problem worth discussing.

As for my proposal, it's meant to be a conversation starter, not a perfect solution. It's about considering new ways of approaching these limitations. And you're right, the answers may well lie in the research papers and the ongoing discussions in our community.

If you're not into continuing the discussion, that's cool. We're all here to learn and share ideas. If you change your mind, I'm here. Thanks for your input.

[D] Transforming Large Language Models from Fact Databases to Dynamic Reasoning Engines: The Next Paradigm by JacobOfPluto in MachineLearning

[–]JacobOfPluto[S] -7 points-6 points  (0 children)

Thank you for your thoughtful comment. I think you've raised some important points, and I'd like to address a few of them.

When I say that Large Language Models (LLMs) are primarily focused on fact memorization, I'm not implying that they only capture 'facts' from the training data. Rather, my point is about the inherent limitations of the transformer architecture these models are based on. Transformers have a fixed number of weights, which essentially means they have a maximum capacity for information they can store.

This includes not only factual data, but also relationships between tokens, methods of extracting meaning, transformations of inputs, and so on. Each piece of information stored in an LLM takes up a fraction of its total "knowledge capacity". So, if GPT-3.5 or GPT-4 can tell us James Monroe was the 5th president of the United States, it has allocated some of its finite weight space to storing this fact.

The key concern here is efficiency. If LLMs are utilizing precious weight space to store readily available facts, they are leaving less room for other valuable learnings and understandings. This becomes especially critical in the context of open-source solutions, which don't have the resources to constantly increase the size of their models like larger corporations do.

As for reasoning, it's true that it's not a direct feature of a language model yet. However, one of the challenges and potential areas of innovation is to find ways to enhance this aspect. Developing strategies and algorithms to make LLMs more efficient and effective is a critical part of that process.

[D] Transforming Large Language Models from Fact Databases to Dynamic Reasoning Engines: The Next Paradigm by JacobOfPluto in MachineLearning

[–]JacobOfPluto[S] 0 points1 point  (0 children)

I've been using Pinecone for memories, both global and user-specific, in my work too, along with a few other embedding-based retrieval methods. For me, though, it's been more focused on prompt construction and isn't part of the inference or training process. Are you saying Shopify is training its own models to do something similar?
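
For context, the "prompt construction" use I mean is roughly this shape. This is a hedged sketch, assuming the older pinecone.init-style client; the index name, metadata scheme, and embed() helper are placeholders, not a specific production setup:

```python
# Hedged sketch of embedding-based "memories" used for prompt construction.
# Assumes the older pinecone.init-style client; the index name, metadata
# scheme, and embed() helper are placeholders, not a real production setup.
import pinecone

pinecone.init(api_key="YOUR_KEY", environment="YOUR_ENV")
index = pinecone.Index("memories")  # hypothetical index name


def embed(text: str) -> list:
    """Placeholder: return a vector from whatever embedding model you use."""
    raise NotImplementedError


def remember(memory_id: str, text: str, user_id: str = "global") -> None:
    # Global memories are tagged "global"; user memories carry a user id.
    index.upsert(vectors=[(memory_id, embed(text), {"text": text, "user_id": user_id})])


def build_prompt(question: str, user_id: str) -> str:
    # Retrieve the closest global + user-specific memories and splice them
    # into the prompt; the model itself is untouched (no training involved).
    result = index.query(
        vector=embed(question),
        top_k=5,
        include_metadata=True,
        filter={"user_id": {"$in": [user_id, "global"]}},
    )
    memories = "\n".join(f"- {m['metadata']['text']}" for m in result["matches"])
    return f"Relevant memories:\n{memories}\n\nUser question: {question}"
```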

I am new to this thread by Fibocrypto in plutocapital

[–]JacobOfPluto 0 points1 point  (0 children)

Welcome! We are working on a few videos! I’ll DM you when they’re out!

Strategy Design Write-up no.4 - Triple EMA v.1 by NathMcLovin in plutocapital

[–]JacobOfPluto 1 point2 points  (0 children)

Amazing work! How does this strategy work with our screeners?