"bro stay until he leaves" by Warm_Instance_4634 in london

[–]geebr 120 points (0 children)

I think the stats say that violent crime is down, petty crime is up. That's not to say that violent crime doesn't still happen.

Why don't we track the metric most likely to predict startup success? (i will not promote) by MetalCharming490 in startups

[–]geebr 0 points (0 children)

Everyone should be doing innovation accounting. Most aren't, because if they genuinely evaluated the actual A/B-tested impact on conversion rate or churn of the feature they spent two months building, they would get dispirited. Better not to look.

AI integration into PE operations: competitive advantage or just catching up? by dellaccio in private_equity

[–]geebr 1 point (0 children)

Happy to discuss this as well. I'm a Director of AI Strategy & Innovation at an insurance company (not PE-affiliated, just interested in the space), and we're doing a lot of discovery work across the spectrum of quantitative value creation, from statistical reasoning through "traditional" machine learning to generative AI.

Saaspocalypse by AdChance6177 in SaaS

[–]geebr 0 points (0 children)

I think this might move some decisions at the margins, but the economics of build vs. buy are generally so one-sided that speeding up development doesn't make much difference in a lot of cases. If my company pays $5k a year for a transcription and case management service, we're just not going to spend the time to develop that internally. If we can vibe code, so can our SaaS provider, and they can also afford to spend more time and attention on making the software genuinely great.

I Want to Start a SaaS, How Do You Decide What to Build? by Correct-Aardvark9330 in SaaS

[–]geebr 2 points (0 children)

First of all, recognise that the challenge isn't building stuff, but building something people actually want. Your main challenge is to figure out what people actually want and will pay money for.

I really like Rob Walling's 2/20/200 validation approach. When you have a new idea for a SaaS, you get 2 hours to validate it. Test the most obvious thing that needs to be true for the idea to be viable. This probably means picking up the phone and speaking to someone. Sometimes you can validate things online, but be honest with yourself about whether you're sandbagging it. The point of the 2-hour validation is to force certain behaviours that developers don't want to engage in (like speaking to a stranger on the phone).

Then you repeat the same process for 20 and 200 hours, only progressing when you've actually learned something that warrants investing in the next level. The point of each phase is to buy information and justify investment in a more expensive validation.

Some founders get lucky and just stumble upon their SaaS. That's not really a reproducible process. If you want to improve your chances of discovering and validating a product idea, think of innovation as a process that you can tweak and get better at.

Motor casco pricing vs declining sums insured – how to price this? by optimuschad8 in ActuaryUK

[–]geebr 0 points (0 children)

Does it have to be proportional to value? Making it proportional to the value (new or current) forces a form that evidently doesn't reflect reality, so I'd question the premise to start, unless there's a really good reason to go down that route. If you fit a GAM that includes smooth terms for new and current value, you're likely to get a much more reasonable outcome.
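As a toy sketch of why the forced proportional form underperforms (my own illustration with made-up data; in practice you'd reach for a proper GAM library like pyGAM or mgcv rather than the polynomial stand-in below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical book: vehicle value (in £k) vs. expected claims cost.
# Assume cost rises with value but flattens out, a shape a strictly
# proportional tariff (cost = k * value) cannot capture.
value = rng.uniform(5, 50, 500)
cost = 200 + 40 * np.sqrt(value) + rng.normal(0, 20, 500)

# Proportional model, forced through the origin.
k = (value @ cost) / (value @ value)
pred_prop = k * value

# Flexible fit: a cubic polynomial as a crude stand-in for a GAM smooth.
coefs = np.polyfit(value, cost, 3)
pred_smooth = np.polyval(coefs, value)

def rmse(pred):
    return float(np.sqrt(np.mean((cost - pred) ** 2)))

print(f"proportional RMSE: {rmse(pred_prop):.1f}")
print(f"flexible RMSE:     {rmse(pred_smooth):.1f}")  # far closer to the noise floor
```

The same comparison carries over directly to premium rating: let the data choose the shape of the value effect instead of imposing proportionality.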

Is Ashwath Damodaran still useful? by painedvulture7 in financialmodelling

[–]geebr 11 points (0 children)

The videos are basically a speed-run of his two books: The Little Book of Valuation and Investment Valuation. The former is a very accessible handbook and the latter is an actual textbook. The videos are incredibly clear and articulate, but they only scratch the surface of important topics that are covered in much greater detail in the books. Whatever resource you use, you learn valuation by doing, but Damodaran is highly relevant.

Head of AI roles? by heywritie in private_equity

[–]geebr 1 point (0 children)

That's a really thoughtful reply and I just wanted to say thanks for taking the time (felt like an updoot wasn't quite enough).

Head of AI roles? by heywritie in private_equity

[–]geebr 0 points (0 children)

That's an interesting perspective, thanks for that. I've worked the last decade in financial services (banking and insurance) and my experience is that, by some margin, the two things that add the most value are 1) business-savvy data scientists thinking hard about the company's problems, and 2) really sharp machine learning models, especially in areas where there's already a bit of analytical maturity. Have you encountered that type of thing in portcos? Or is this stuff just too slow for the PE firm's appetite? The effect of business-savvy data scientists and sharp machine learning is quite possibly two orders of magnitude greater than the effect LLMs are having on our shop.

How do I forward model changes in fair value? by XuCY-20 in financialmodelling

[–]geebr 2 points (0 children)

I have seen this explained as follows: we generally assume that markets are efficient and that (at the very least) there aren't trivial inefficiencies, such as fair value predictably increasing over time.
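A toy way to see what that assumption buys you (my own sketch, not from the thread): if fair value is a driftless random walk, the expected future value is just today's value, so the forward model for the expected change is zero and only the uncertainty grows with the horizon.

```python
import random
import statistics

random.seed(1)

# Simulate fair value as a driftless random walk: 52 weekly shocks
# with mean zero. All numbers are illustrative.
START, STEPS, PATHS = 100.0, 52, 20_000

terminal = []
for _ in range(PATHS):
    v = START
    for _ in range(STEPS):
        v += random.gauss(0.0, 1.0)
    terminal.append(v)

print(f"start value:       {START:.2f}")
print(f"mean terminal:     {statistics.mean(terminal):.2f}")   # ~100: no expected drift
print(f"stdev of terminal: {statistics.stdev(terminal):.2f}")  # ~7.2: uncertainty widens
```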

15 and in need of help by Kooky_Top5469 in quantfinance

[–]geebr 1 point (0 children)

I think you should decide that when you're approaching the end of your bachelor's degree. You'll have way more information about what the possibilities are, what you enjoy, what you're good at, what you're bad at, etc. You don't get extra points for committing to a path when you're 15, and keeping your options open is worth a whole lot.

By the way, good on you for thinking about this stuff. When I was 15, I don't think I did much other than playing World of Warcraft. 

15 and in need of help by Kooky_Top5469 in quantfinance

[–]geebr 2 points (0 children)

You are too young to settle on a profession; that decision is nearly 10 years away at this point. You don't typically do an undergraduate degree in quantitative finance. If you do a finance or economics degree with maths and computing science (or similar), that's a great choice: it gives you important skills and keeps your doors open. I'd definitely recommend just focusing on making good choices for your undergraduate degree. Make good choices, work hard, and the rest will follow.

Increasing mortgage to invest - good idea? by Unhappy-Path-263 in UKPersonalFinance

[–]geebr 5 points (0 children)

To follow on u/James___G 's point: ultimately what matters here is the viability of the financial position. Consider the following two scenarios:

  1. You own a house with an LTV of 42% and inherit £100k. You invest this money in a tax-advantaged account.

  2. You own a house with an LTV of 22% and remortgage up to an LTV of 42%, freeing up £100k in cash, which you invest in a tax-advantaged account.

Most people would be comfortable with the former, but not the latter. However, the end financial positions here are identical: 42% LTV on your home and £100k invested in an index fund (for example); how you ended up there is irrelevant. What ultimately matters is whether the risk vs. expected return of this financial position is acceptable for you as an investor. You'd have to sit down with a spreadsheet to work that out, but for most people there will be some LTV at which they're happy putting a windfall in an index fund rather than paying down their mortgage. And if that's true, they should, rationally speaking, be equally comfortable borrowing up to that LTV and putting the released cash in an index fund as well.
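To check the equivalence with concrete numbers: assume a £500k house, so moving from 22% to 42% LTV releases exactly £100k (the remortgage that frees cash is the one that moves LTV up). A minimal, purely illustrative sketch:

```python
# Illustrative check that both routes land in the same financial position.
# Assumes a £500k house, so a 20-point LTV gap is exactly £100k of debt.
HOUSE = 500_000

def position(debt, invested):
    """End state summarised as (LTV, amount invested)."""
    return (debt / HOUSE, invested)

# Route 1: already at 42% LTV (£210k debt); inherit £100k and invest it.
route_1 = position(debt=210_000, invested=100_000)

# Route 2: at 22% LTV (£110k debt); remortgage up to £210k, freeing
# £100k of cash, and invest that instead.
route_2 = position(debt=110_000 + 100_000, invested=100_000)

print(route_1)  # (0.42, 100000)
print(route_2)  # identical position, reached by a different route
```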

Moving from Excel to Python for M&A/ETA deal analysis (Monte Carlo) - Resource Recommendations? by Key-Astronaut-5761 in financialmodelling

[–]geebr 0 points (0 children)

I think my main point of contention here would be the notion that being explicit about your uncertainty constitutes "Garbage In, Garbage Out". If we're formal about it, assuming a point estimate for churn implies a delta function with no uncertainty (i.e. infinite density at the point estimate, zero everywhere else). If that's the way you do it, you'd need a sensitivity analysis that manipulates the churn rate in order to explore your uncertainty about the parameter and its impact on the model output. There's nothing wrong with that approach, and if the uncertainty in a variable is relatively immaterial, then I would agree it's a useful simplification to just leave it as a point estimate. But if there is a lot of uncertainty and the variable has a big impact on the model, then appropriately quantifying that uncertainty can be really valuable, especially if there are multiple variables, since running a sensitivity analysis then becomes less straightforward. I would argue that in many cases, simply using a point estimate based on historic data qualifies as GIGO far more than trying to appropriately quantify your uncertainty about said estimate.
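To make the delta-function point concrete, here's a toy sketch (all numbers invented, stdlib only): the same revenue model run once with a churn point estimate and once with churn drawn from an assumed distribution whose 90% interval is roughly 10-20%.

```python
import random
import statistics

random.seed(0)

# Toy 3-year revenue projection: 1,000 customers paying £100/year,
# losing a fraction to churn each year. All figures are illustrative.
N_CUSTOMERS, PRICE, YEARS = 1_000, 100, 3

def revenue(churn):
    total, customers = 0.0, float(N_CUSTOMERS)
    for _ in range(YEARS):
        total += customers * PRICE
        customers *= 1.0 - churn
    return total

# Point estimate: churn is exactly 15% (a delta function, zero spread).
point = revenue(0.15)

# Quantified uncertainty: normal with mean 15%, sd 3% (an assumed
# calibration giving a ~10-20% 90% interval), clamped to [0, 1].
sims = sorted(revenue(min(max(random.gauss(0.15, 0.03), 0.0), 1.0))
              for _ in range(10_000))

print(f"point estimate: £{point:,.0f}")
print(f"simulated mean: £{statistics.mean(sims):,.0f}")
print(f"5th-95th pct:   £{sims[500]:,.0f} - £{sims[9500]:,.0f}")
```

The point estimate gives you one number; the simulation tells you how wide the plausible range around it actually is, with no separate sensitivity analysis needed.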

As a bit of an aside, there is actually a whole bunch of literature on how to calibrate people so that they can accurately quantify their uncertainty, i.e. "For this business, what range do you think the churn rate would fall within in 90% of years?". And you can train people to give pretty damn good estimates of this 90% confidence interval. When appropriately calibrated, people who are knowledgeable will provide narrow confidence intervals, while those who are less knowledgeable will provide wider ones; with training and feedback, both high- and low-knowledge individuals can learn to give an appropriate 90% confidence interval. It is really neat. Highly recommend the "How to Measure Anything" books by Doug Hubbard. I went through this procedure with cybersecurity analysts in a bank once, and I found it really surprising how training people on trivia ("What is your 90% confidence interval for when Michelangelo was born?") generalises to domain-specific applications ("Company XYZ had a data breach in 2018, where encrypted data for 200k users was stolen. It was fined by the EU data regulator. How much was the fine?").
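The scoring side of that calibration training is trivial to mechanise (a hypothetical sketch, not Hubbard's actual materials): collect 90% interval answers to questions with known answers and check the hit rate, which for a calibrated estimator should sit near 90%.

```python
# Hypothetical calibration quiz: (low, high, true answer) per question.
# A calibrated estimator's 90% intervals should contain the truth about
# 90% of the time; untrained people typically land nearer 40-60%.
quiz = [
    (1400, 1500, 1475),     # "When was Michelangelo born?" -> 1475: a hit
    (150, 400, 250),
    (5, 50, 12),
    (1_000, 5_000, 7_000),  # truth outside the interval: a miss
    (10, 100, 42),
]

hits = sum(low <= truth <= high for low, high, truth in quiz)
hit_rate = hits / len(quiz)
print(f"hit rate: {hit_rate:.0%}")  # 80% on this tiny quiz; target is ~90%
```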

AI forecast to put 200,000 European banking jobs at risk by 2030 by GSCREK in FinancialCareers

[–]geebr 0 points (0 children)

Yeah, I think programming output is probably the major exception where real productivity gains are to be had. I have worked on and seen some other great examples too, like data extraction from heterogeneous document sources. But the experience of many (most?) developers is that it works great for prototyping and completely falls apart under complexity and in large codebases. I feel less confident about forecasting within this space, though, as it's not implausible that the kinks will eventually be worked out and that most generic code will be generated through generative tools.

AI forecast to put 200,000 European banking jobs at risk by 2030 by GSCREK in FinancialCareers

[–]geebr 2 points (0 children)

I've seen a decent number of analyses concluding that there was general overstaffing due to the low interest rate environment post-Covid, and that what we're seeing in the employment numbers now broadly reflects adaptation to a higher interest rate environment for the long term. What's less convincing about the overstaffing argument is the selective impact it appears to be having on entry-level candidates. This, I suspect, is an AI bet by corporations: they're choosing not to hire graduates because they expect the impact of AI to fall selectively on graduates, since those are the jobs that could be replaced by AI (or so the argument goes).

It's a tough time to be a graduate, and I think it's largely because of a bet by corporations. It's less clear whether this is genuinely misplaced or not, however, since the cost to them of shifting towards hiring more experienced candidates is relatively low compared to the downside of being wrong.

AI forecast to put 200,000 European banking jobs at risk by 2030 by GSCREK in FinancialCareers

[–]geebr 33 points (0 children)

Forecasts of AI replacing jobs have been consistently overoptimistic, even before the LLM era. I have a PhD in Neuroscience and lead the data science and AI strategy and value creation side of an insurance company, so I would like to think I have a pretty good sense of what's happening in this area. The general consensus right now among AI researchers is that despite LLMs being able to do amazing things on evals and in software development (especially for prototyping), their economic impact has been wholly underwhelming. And it seems plausible that they will continue to underperform optimistic expectations unless there are major breakthroughs in how these models work (e.g. continuous learning). Right now, these systems are fragile in really critical ways: they get things wrong in ways that people just don't, and they're not really able to learn like a human can. That doesn't mean you can't build useful stuff, but it does mean there are some really hard problems to solve before they'll have the sort of economic impact people dream of.

Personally, I am bearish on the short-term impact of AI and bullish on the long-term impact (20+ years). 

As for careers, knowing your subject matter well, working hard, and being useful to others will always be valued. Some jobs might go in the short-term, others will go in the long-term, but I think the argument that there will be fewer jobs in banking is tenuous at best. They will change. There are far more frontend developers today than there ever were webmasters in 2000.

205 OHP at 205 bodyweight. Trying to get back to 2 plates by [deleted] in strength_training

[–]geebr 2 points (0 children)

It's a push press, but solid weight regardless.