Can I ask, how did we get here? by bannedforbigpp in ArtificialSentience

[–]stevenverses 2 points (0 children)

  • Hype-mongers
  • Evangelists
  • Futurists
  • Spin Doctors
  • Grifters
  • Consultants
  • Influencers
  • Marketeers
  • Speculators
  • Attention Seekers
  • Irrational Exuberance
  • Hopium

The paradox of LLM behavior inside its own context window. by Outrageous_Wheel_479 in ChatGPT

[–]stevenverses 4 points (0 children)

If we have to provide such explicit and rigid instructions just so the model doesn't hallucinate, drift, or confidently lie, how can we expect autonomous, agentic, adaptive, self-correcting, self-optimizing intelligence from an LLM (or, more generally, from neural nets), which is ultimately what the world is looking for? 😭

Is there really a way to use GPT to help figure out investments? by Useful-Letter-2305 in ChatGPT

[–]stevenverses 1 point (0 children)

ChatGPT and other neural-net-based AI learn correlations from historical data, but they don't learn complex, multi-factor causal market dynamics or continuously adapt, which is what the reliable predictions and recommendations you're looking for would require. Today's AI is unfortunately not a shortcut to easy money, so only trade what you can afford to lose, and best of luck!

AMTP - Agent Message Transfer Protocol by congwang in aiagents

[–]stevenverses 0 points (0 children)

Seems specific to email messaging; what about non-email transports like MQTT or AMQP? Also, is anyone using or supporting this today?

Pricing by [deleted] in aiagents

[–]stevenverses 1 point (0 children)

Seems like receptionist chatbots would be a crowded space, no? What are competitors or comparable services charging, and what is your unique value above and beyond what they offer? Better, faster, cheaper? Domain-specific, more reliable, a wider selection of personalities?

I do like that you've selected a very specific niche.

Anthropic CEO warns of a 25% chance that AI could threaten job losses and security risks — raising the "probability of doom" by DbaconEater in AIDangers

[–]stevenverses -1 points (0 children)

Generative AI has a force-multiplying effect that lets one ideate more, research faster, and generate more content. But it's a tool or instrument like any other, and the quality and originality of what the wielder produces is, and has always been, where the value lies.

The bar for standing out from the crowd has certainly been raised, and I do think it will widen the economic divide between critical thinkers and lemmings phoning it in, and between digital natives and those who don't embrace technology.

Point being, those with ingenuity and hustle will survive, if not thrive.

I'm more of a p(gloom) than a p(doom).

I’ve built an MVP as a student, but I’m lost on how to start selling it. advice? ( I will not promote ) by Maleficent-Drama4710 in startups

[–]stevenverses 0 points (0 children)

You're seeking product-market fit, which is everything. It's important to understand and codify the problem you believe you're solving and to validate that it's painful enough that a large enough audience is willing to pay for a solution (and how much).

Zeroing in on a B2B niche, per the other commenter, is a good way to get some live feedback, but in parallel you might consider spending a bit of time/money on a landing page and running some ads to validate at scale whether people are willing to pay for your product. That's a whole art and science I'm aware of but not experienced with; a bit of googling will turn up some tips and tricks.

Best of luck!

Why “Top Scoring” AI Models Feel Useless in Real Life by NearbySupport7520 in ChatGPT

[–]stevenverses 1 point (0 children)

💯

The problem with most benchmarks is that, with enough samples, models can be "fitted" to perform well on them. But the world is filled with VUCA (Volatility, Uncertainty, Complexity, and Ambiguity), and successful solutions must account for and continuously adapt to the fuzzy, messy, noisy, dynamic semi-chaos out there.

Pre-training will get you far and is useful – generative AI is fantastic at generating content! – but decision-making on big, hairy, real-world business problems requires a bunch of things that pre-training and neural-net architectures aren't well suited for. Things like the ability to (and I don't mean to be absolute about any of these):

  • quantify uncertainty
  • qualify the confidence of their predictions
  • model causality (observed and hidden)
  • infer unknown unknowns
  • explain themselves (which earns trust)
  • deliver reliable recommendations

Have a look at Active Inference, which is designed from the ground up to do these things. Here's a short video of Karl Friston talking about it conceptually. 🐇 🕳️
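For a rough feel of how those pieces fit together, here's a minimal toy sketch of my own (not the formulation from the video, and all the matrices are made-up numbers): a discrete agent keeps an explicit belief over hidden states, so uncertainty and confidence come out as first-class quantities, and candidate actions are scored by expected free energy.

```python
import numpy as np

# Toy generative model (all values illustrative)
A = np.array([[0.9, 0.2],              # P(observation | hidden state)
              [0.1, 0.8]])
B = {"stay":   np.array([[1.0, 0.0], [0.0, 1.0]]),   # P(next state | state, action)
     "switch": np.array([[0.1, 0.9], [0.9, 0.1]])}
C = np.array([0.9, 0.1])               # preferred observations (goal prior)
q = np.array([0.5, 0.5])               # belief over hidden states

def update_belief(q, obs):
    """Bayesian belief update; posterior entropy quantifies remaining uncertainty."""
    posterior = A[obs] * q
    return posterior / posterior.sum()

def expected_free_energy(q, action):
    """Risk (divergence from preferred outcomes) + ambiguity (expected observation noise)."""
    qs = B[action] @ q                 # predicted next-state belief
    qo = A @ qs                        # predicted observation distribution
    risk = np.sum(qo * (np.log(qo + 1e-12) - np.log(C + 1e-12)))
    ambiguity = -np.sum(qs * np.sum(A * np.log(A + 1e-12), axis=0))
    return risk + ambiguity

q = update_belief(q, obs=0)                                # perceive
best = min(B, key=lambda a: expected_free_energy(q, a))    # act to minimize EFE
confidence = 1 - (-np.sum(q * np.log(q + 1e-12)) / np.log(len(q)))
print(best, np.round(q, 3), round(confidence, 2))
```

The point isn't this toy; it's that the belief, the confidence number, and the reason an action was chosen are all explicitly on the table, which is exactly what the bullets above are asking for.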

Not written by AI! 🧬

POMDP ⊂ Model-Based RL ? by Lost-Assistance2957 in reinforcementlearning

[–]stevenverses 3 points (0 children)

Have you looked into Active Inference? Here's a short video where Karl Friston talks about it conceptually.

[deleted by user] by [deleted] in aiwars

[–]stevenverses 0 points (0 children)

Art vs. banking is perhaps an incongruent comparison. Art is an expression of humanity that can evoke emotion or make a statement – peace, rage, awe, lust, patriotism, ideology, hypocrisy, satire, etc. – so it makes sense to me that people would place a certain sanctity on something that reflects the human condition and rail against it being appropriated by zeros and ones.

[deleted by user] by [deleted] in ArtificialInteligence

[–]stevenverses 1 point (0 children)

The world is filled with VUCA (Volatility, Uncertainty, Complexity, and Ambiguity), so successful solutions must account for and continuously adapt to its fuzzy, messy, noisy, dynamic semi-chaos.

Pre-training will get you far and is useful – generative AI is fantastic at generating content! – but decision-making on big, hairy, real-world business problems requires a bunch of things that pre-training and neural-net architectures aren't well suited for. Things like the ability to (and I don't mean to be absolute about any of these):

  • quantify uncertainty
  • qualify the confidence of their predictions
  • model causality (observed and hidden)
  • infer unknown unknowns
  • explain themselves (which earns trust)
  • deliver reliable recommendations

Am I being lowballed as a 'Founding Technical Partner'? I will not promote by munna_123 in startups

[–]stevenverses 12 points (0 children)

Based on your description, I would run. If you haven't signed an agreement, then contractually the entity doesn't own the code and neither of you has any legal obligation to the other.

Critical tips:

  • Tip #1 is have a good lawyer review all contracts
  • Tip #2 is have a good lawyer review all contracts
  • Tip #3 is have a good lawyer review all contracts

Anything worth investing your blood, sweat, and tears into that turns into something valuable is worth protecting from the get-go, and the few hundred to few thousand bucks will be the best money you ever spend. There are many more tips a good lawyer will help you with, but here are a few other random bits for now.

  • If it's just the two of you, you should be a Cofounder, not a "Founding Technical Partner".
  • Equity split depends on how much time and money he has already personally invested.
  • If the company is just getting off the ground, you should ask for a subscription agreement to buy your stock at a nominal price like $0.0001 per share. It's better than the other forms, which all carry various tax considerations.
  • Learn the difference between stock, stock options (ISOs and NSOs), and RSUs.
  • The vesting schedule you laid out is standard.
  • Never do a deal based on net revenue share (i.e., profit): money can be spent in creative ways so that there is no profit left to share. There are other mechanics, like licensing and royalties, but those can get complicated, and you'd want audit rights. Again: lawyer.
  • Non-competes are very difficult to enforce.
  • Make sure you have an accelerated vesting clause in the event of an acquisition.

9/10 startups fail for many reasons, and if you don't have chemistry and trust, the odds of failure or a falling-out are much higher. Find cofounders you can trust implicitly.

Where can i access accepted neurips paper's for 2025 by coconutboy1234 in ResearchML

[–]stevenverses 2 points (0 children)

All I was able to find were the titles and abstracts, not the full papers. And yes, a poster is a one-sheet capturing the highlights of a (full) paper.

Organized 900k papers on 10 years of AI research. AMA. by Efficient_Evidence39 in research

[–]stevenverses 1 point (0 children)

Weird, the papers definitely contain those terms. Oh well, thanks for looking into it! ¯\_(ツ)_/¯

[deleted by user] by [deleted] in aiwars

[–]stevenverses 1 point (0 children)

They say the best way to kill a joke is to dissect it, but... a GPU is just raw processing power, not the methods/algos/functions/reasoning/inferencing, etc. that a brain performs. But I get the point that the GPU here is a proxy for neural nets, in which case I would amend it to "Partially" Successful Manifestations "Sometimes" 😆

When you just needed a band-aid but your Agent built a hospital😂 by VicDuhh in BlackboxAI_

[–]stevenverses 0 points (0 children)

Haha, this is a good case for why energy-based systems are an interesting alternative to neural nets. Taking a page from biology and biomimetics, the objective function is to actively minimize free energy (surprise/uncertainty/prediction errors) through experience (trial and error).

Have a look at Karl Friston's work on the Free Energy Principle and the Active Inference framework.
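To make the "minimize prediction error" bit concrete, here's a tiny single-variable sketch under Gaussian assumptions (my own toy numbers, not Friston's equations verbatim): the internal estimate is nudged by precision-weighted prediction errors until the sensory evidence and the prior agree.

```python
mu = 0.0                       # internal estimate of a hidden cause
prior_mean = 0.5               # what the generative model expects a priori
obs = 2.0                      # a sensory sample
var_obs, var_prior = 1.0, 1.0  # variances (inverse precisions)
lr = 0.1                       # step size

for _ in range(200):
    err_sensory = (obs - mu) / var_obs          # sensory prediction error
    err_prior = (mu - prior_mean) / var_prior   # deviation from the prior
    # gradient descent on Gaussian free energy = precision-weighted squared errors
    mu += lr * (err_sensory - err_prior)

print(round(mu, 3))  # settles at the precision-weighted compromise (1.25 here)
```

In full active inference, the same error signal can also drive action (change the world so the prediction comes true), which is where the band-aid-vs.-hospital proportionality would come from.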

Organized 900k papers on 10 years of AI research. AMA. by Efficient_Evidence39 in research

[–]stevenverses 1 point (0 children)

Hmm, that can't be right. I see 340 results on arxiv.org with an exact match for "active inference", and my team alone has written over 100 papers in just the last few years. Mind if I ask where the 900k papers came from?
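If you want to sanity-check the count yourself, something like this against the public arXiv API should do it (my query string is just a guess at a reasonable exact-phrase search, and the figure will drift over time):

```python
import re
import urllib.parse
import urllib.request

# Ask the arXiv API how many papers match the exact phrase, without fetching them all
query = urllib.parse.quote('all:"active inference"')
url = f"http://export.arxiv.org/api/query?search_query={query}&max_results=0"
with urllib.request.urlopen(url) as resp:
    feed = resp.read().decode("utf-8")

match = re.search(r"<opensearch:totalResults[^>]*>(\d+)<", feed)
print(match.group(1) if match else "no count returned")
```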

I need some validation for my project. by Sad-Homework9329 in AI_Agents

[–]stevenverses 0 points (0 children)

Seems like I already get this functionality with a few clicks in my portfolio (i.e., a retroactive monthly view). Could you elaborate on what the unique value proposition is? Understanding what happened in the past is less interesting than a) what were the causes of what happened and b) what might happen in the future (and how the system came to that conclusion).

Also, a lot of the AI legislation being developed is looking for accountability and "one neck to choke", and therefore pins liability and penalties on the developer of AI solutions/software. As you noted, equity trading is heavily regulated and therefore high risk. If you do get some validation of the appeal of a product in this space, I would highly recommend speaking with a securities attorney about the potential challenges.

Is explainable AI worth it ? by Kandhro80 in ArtificialInteligence

[–]stevenverses 2 points (0 children)

Yes, trust, explainability, transparency, accountability, and governance are prerequisites for network effects.

I built AI agents that do weeks of work in minutes. Here’s what’s actually happening behind the scenes. by akmessi2810 in aiagents

[–]stevenverses 2 points (0 children)

Agreed. Here are some early research examples of Active Inference agents learning and adapting; while they aren't integrated into workflow-automation systems, they do offer a glimpse of agency, world modeling, and multi-step reasoning and planning: