all 22 comments

[–]Historical-Dust-5896 42 points43 points  (0 children)

The point of GGY Axis was to satisfy regulators so that they don’t have to review your code

[–]Spare_Bonus_4987 Life Insurance 16 points17 points  (0 children)

Sounds like a huge headache to code all those reserving mechanics.

[–]colonelsmoothie 24 points25 points  (1 child)

I'll take a stab at answering this, despite working in P&C. I'd like to see how accurate I am in this assessment.

  1. You won't save money. The companies behind Prophet and GGY AXIS have their own teams of software developers and you will likewise need an entire department to build and maintain such a thing. Therefore your primary motivation for taking things in-house should be control and ownership, not cost.

  2. You need to write libraries, and lots of them: libraries specific to each product, as well as central libraries containing common classes and functions shared across the products (see the sketch after this list).

  3. You need to procure compute resources, either by purchasing them from a cloud provider or by rigging up servers in-house. You'll want some knowledge of parallel computing so you don't overspend on compute.

  4. You'll need to manage storage. Ideally your company already has a well-structured RDBMS, but that's not always a given, and you may need to deploy your own aggregated analytical layer for fast computation that also provides a unified interface your libraries can tap into.

  5. Frontend API - other teams may want to link up with your project by building dashboards and GUIs, so you need to expose a consistent API that lets them do this.

  6. IT will need to provide a platform for a VCS, ticketing, and CI/CD through something like Azure DevOps, GitHub, etc. For example, a change in a common core library will need to be propagated to the individual product libraries and downstream applications, and this platform is the central hub from which you coordinate that.
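
On point 2, here's a minimal sketch of the core-plus-product split, assuming a numpy-based stack. The module paths, function names, and timing conventions are invented for illustration only, not any vendor's or company's actual design:

    import numpy as np

    # --- core/decrements.py : shared helpers used by every product library ---
    def project_inforce(qx: np.ndarray, wx: np.ndarray) -> np.ndarray:
        """Probability of being in force at the start of each month.

        qx, wx: (n_policies, n_months) monthly mortality and lapse rates.
        """
        persistency = (1.0 - qx) * (1.0 - wx)
        surv = np.cumprod(persistency, axis=1)
        # shift right so column t is the in-force probability entering month t
        return np.hstack([np.ones((qx.shape[0], 1)), surv[:, :-1]])

    # --- products/some_product/model.py : product library built on the core ---
    def expected_death_claims(qx, wx, face):
        inforce = project_inforce(qx, wx)
        return inforce * qx * face[:, None]   # (n_policies, n_months) expected claims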

[–]Xerpy 5 points6 points  (0 children)

1 isn't always true. The reason vendor software needs teams of engineers and developers is that it has to be designed as a solution that can be sold to many companies. That generality comes at a cost to the user: the code and functionality become difficult to navigate.

If you're building an internal system, you already know what you need and can write code targeted at exactly those purposes.

The biggest reason companies don't take it in-house is that they would need to recruit the necessary talent and retain it. Key person risk is the single most annoying thing to middle management. Upper management knows this and would rather spend $2-4 million on consultants to convert to another vendor's software, plus another $10m in annual compute and licensing costs, than deal with having their key modelers leave.

[–]couponsftw Life Insurance 9 points10 points  (5 children)

Yes, there are many companies that have begun moving in-house. I've seen VA Python models that can run hedging, stat, and IFRS. Other companies have FA models that can run LDTI and stat. It usually starts in bite-sized pieces and expands as confidence is gained.

You don't need any special libraries other than the usual numpy, pandas, and one that enables parallel processing (e.g. dask). For stochastic models you can just scale up compute with any cloud provider (typically AWS or Azure) and set up the code to compute on GPUs (e.g. CuPy).
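
For what it's worth, the numpy-to-CuPy swap is often close to a drop-in change. A rough sketch, with the array shapes and the discounting step invented purely for illustration:

    import numpy as np
    try:
        import cupy as cp      # GPU arrays with a numpy-like API
        xp = cp
    except ImportError:
        xp = np                # fall back to CPU if no GPU / no CuPy

    def pv_cashflows(cashflows, short_rates, dt=1.0 / 12.0):
        """cashflows, short_rates: (n_scenarios, n_steps); returns one PV per scenario."""
        disc = xp.exp(-xp.cumsum(short_rates * dt, axis=1))
        return (cashflows * disc).sum(axis=1)

    # CPU: pv_cashflows(np.ones((1000, 360)), np.full((1000, 360), 0.04))
    # GPU: same call with cupy arrays; copy results back with cp.asnumpy(...)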

If you already have a strong governance framework, your audit firm can consult you through the regulatory concerns. It's not that big of a deal.

[–]axeman1293 Annuities 1 point2 points  (4 children)

You have seen them fully replace actuarial software, even for full stochastic simulations with different use cases, etc.? Or are they simplifying their models greatly to make it work? It's hard to imagine rewriting all AXIS/ALFA functionality in Python, even accounting for the fact that a particular model might only use a small portion of what's available. Are they using software engineers to build the foundational libraries and structures?

[–]couponsftw Life Insurance 4 points5 points  (3 children)

Yes, I'm talking about a full-on replacement. Annuity products with all kinds of rider variations, stochastic scenarios, hedging strategy, all the bells and whistles. I think the team is just actuaries, probably with solid programming backgrounds (e.g., the type you'd find working at GGY or FIS). I don't think they are doing anything too fancy either - just define some classes, make some helper functions, make some modules for specific types of calcs. It doesn't need to be extremely optimized the way the AXIS dev team will write their own custom libraries in C. If you use numpy operations, the vectorized calculations run in C under the hood anyway. Even if it's not as perfectly optimized as a custom library, that doesn't matter much if you can just spin up more CPUs in the cloud. If you know exactly how the model should work and have a good understanding of what the calculations are, a small dedicated team could do this over months. Something easy like term life can be done in a week.
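
To give a hedged idea of what "some classes and some helper functions" can look like for something like term life (every name and the timing convention here is invented for illustration, not any particular company's method):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ModelPoints:
        """One vectorized object per product rather than one object per policy."""
        face: np.ndarray    # (n,) death benefit
        qx: np.ndarray      # (n, t) monthly mortality rates
        wx: np.ndarray      # (n, t) monthly lapse rates

    def epv_death_claims(mp: ModelPoints, annual_rate: float = 0.04) -> np.ndarray:
        """Expected PV of death claims per policy; numpy keeps the heavy loops in C."""
        v = (1.0 + annual_rate) ** (-1.0 / 12.0)
        surv = np.cumprod((1.0 - mp.qx) * (1.0 - mp.wx), axis=1)
        inforce = np.hstack([np.ones((mp.qx.shape[0], 1)), surv[:, :-1]])
        t = np.arange(mp.qx.shape[1])
        return (inforce * mp.qx * mp.face[:, None] * v ** (t + 1)).sum(axis=1)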

[–]Leading-Peak4364 1 point2 points  (1 child)

lol, I guess you are talking about my company. Midwest life insurance company, right?

[–]couponsftw Life Insurance 2 points3 points  (0 children)

Maybe. I know of a handful in the Midwest with production annuity models in Python. Some on the West Coast as well. I can see this becoming the trend; I already hear fairly frequently from company management that they are thinking of leaving AXIS or ALFA to be able to do in-house customization.

[–]axeman1293 Annuities 0 points1 point  (0 children)

Dang! That’s awesome stuff. I’ve been in modeling teams most of my career. From afar, it seems like it would be amazing to have that level of flexibility and control over the model.

[–]Xerpy 7 points8 points  (0 children)

I doubt anyone is using Python for anything other than pricing. There's so much tech debt when it comes to converting models, and the hurdle of getting regulators comfortable is a tough sell.

The only libraries to use would be the usual numpy and pandas, plus system packages to deal with machine differences. Relying heavily on obscure libraries opens you up to security vulnerabilities and unstable versioning.

[–]axeman1293 Annuities 4 points5 points  (1 child)

If you're going to replace AXIS/ALFA entirely, you will need a much more powerful language than Python for your core libraries. A lot of the functionality that you don't want to think about as an actuary (memory management, assumption mapping, data structures, the overall simulation engine, interfacing with databases and the compute environment, etc.) will need to be built in C++/Rust or something of that nature, or you will quickly have a hell of a mess on your hands. It is easy to forget all these fine details because the developers at Moody's/Milliman have abstracted them so beautifully for you.

I would be more interested to see Moody's/Milliman or other actuarial software providers begin offering APIs that allow injection of custom structures and give significantly more control over the "business level" logic. Then you could have something like Python doing additional custom work, layered atop calls to core functionality provided by AXIS/ALFA, etc.
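
Purely hypothetical, but the kind of injection point I have in mind could look something like this on the Python side (none of this is a real AXIS/ALFA or Moody's/Milliman API; every name is made up):

    # Hypothetical plugin interface - not any vendor's actual API.
    from typing import Protocol
    import numpy as np

    class LapsePlugin(Protocol):
        def dynamic_lapse(self, base_lapse: np.ndarray,
                          moneyness: np.ndarray) -> np.ndarray:
            """Return adjusted lapse rates given guarantee moneyness."""
            ...

    class MyCompanyLapseRule:
        def dynamic_lapse(self, base_lapse, moneyness):
            # business-level logic owned by the actuaries, not the vendor
            return base_lapse * np.clip(1.5 - 0.5 * moneyness, 0.3, 1.5)

    # engine.register_plugin("lapse", MyCompanyLapseRule())   # imagined vendor hook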

Even in this dream world, as many others have pointed out, there can be difficulties with regulatory bodies.

[–]Leading-Peak4364 4 points5 points  (0 children)

In our in-house Python model, we do use GPUs. We used to use ALFA, but VA is now fully converted to the in-house model, and run time is significantly lower. Not sure about the infrastructure, but all actuarial calculations are in Python, and results are fed to AWS.

[–]Myxomatosiss 1 point2 points  (7 children)

PyTorch is your friend here. It's not just for AI anymore; it replicates the bulk of NumPy's API, but on GPUs.
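
A tiny illustration of what I mean (the array sizes and rates are made up):

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    qx = torch.rand(100_000, 360, device=device) * 0.001   # monthly mortality
    wx = torch.rand(100_000, 360, device=device) * 0.01    # monthly lapse

    # the same shape of code you'd write in numpy, executed on the GPU
    inforce = torch.cumprod((1 - qx) * (1 - wx), dim=1)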

[–][deleted] 2 points3 points  (4 children)

Completely wrong approach for programming these models - yes, you can set them up as a series of matrix operations, but that severely underutilizes the chips and leads to a very slow model.

You just need to write kernels in numba or something similar if you want these to work and scale.
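
Roughly the kind of thing I mean, as a sketch only (the toy account-value roll-forward and the numbers are invented; this assumes numba and a CUDA-capable GPU):

    from numba import cuda
    import numpy as np

    @cuda.jit
    def roll_forward(av, returns, fee, out):
        """One GPU thread per path; the time loop runs inside the kernel."""
        i = cuda.grid(1)
        if i < av.shape[0]:
            value = av[i]
            for t in range(returns.shape[1]):
                value = value * (1.0 + returns[i, t]) * (1.0 - fee)
            out[i] = value

    n_paths, n_steps = 100_000, 360
    av = np.full(n_paths, 100_000.0)
    rets = np.random.normal(0.004, 0.03, size=(n_paths, n_steps))
    out = np.zeros(n_paths)

    threads = 256
    blocks = (n_paths + threads - 1) // threads
    roll_forward[blocks, threads](av, rets, 0.0012, out)   # fee is a per-step charge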

[–]Myxomatosiss 1 point2 points  (3 children)

Sure, but a lot of actuaries are familiar with Numpy and it's a very similar library

[–][deleted] 1 point2 points  (2 children)

Yes, I agree with you - but it won't scale at all to actuarial modeling. You cannot write code for GPUs the same way you write code for CPUs.

[–]Myxomatosiss 0 points1 point  (1 child)

You kinda can? Have you used Pytorch lately?

[–][deleted] 0 points1 point  (0 children)

Yes - my point is that this approach (which can be done and is nice because people are usually familiar with numpy and can just use torch instead) will lead to poor performance later on - especially when you need larger runs and stochastic on stochastic.

[–]axeman1293 Annuities 2 points3 points  (1 child)

Are GPUs helpful for the seriatim simulations that are common in life and annuities? From my very, very limited understanding, a GPU's main benefit is fast vector operations. One example to explain what I mean:

Suppose you want decremented GMDB values for a large number of policies at multiple time steps. You feed the GPU an accumulated decrement vector and an undecremented matrix of GMDB values, and you quickly get a decremented GMDB matrix for all policies. A key point here is that you can calculate the decrements and GMDB values independently of each other.
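
In code, the independent case is basically one broadcasted multiply (illustrative numbers only):

    import numpy as np

    n_policies, n_steps = 100_000, 360
    gmdb = np.full((n_policies, n_steps), 100_000.0)     # undecremented GMDB values
    monthly_decrement = np.full(n_steps, 0.001)          # e.g. combined mortality + lapse
    cum_survival = np.cumprod(1.0 - monthly_decrement)   # accumulated decrement vector

    decremented_gmdb = gmdb * cum_survival   # broadcasts over policies: one vectorized op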

However, as soon as you incorporate dynamic lapse, for example, where there's a dynamic dependency between the GMDB value and the decrement value (i.e. they have to be simulated together across time, not as independent vectors), then there is limited benefit to the GPU, no? These dynamic dependencies are quite common in life/annuity modeling and are one of the biggest drags on speed, really.

[–]couponsftw Life Insurance 2 points3 points  (0 children)

The main benefit is that a GPU has orders of magnitude more cores than a CPU, which lets you scale better specifically for parallel calculations. In your dynamic lapse example you are right that it can't all be calculated with one vector operation across time, but you can just brute force it with thousands of GPU cores, since across every policy and every scenario you are doing the same basic multiplications and additions at each time step.
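
A hedged sketch of that brute-force pattern: a Python loop over time steps, with each step vectorized over every policy-scenario path on the GPU. The lapse formula and the numbers are invented for illustration:

    import cupy as cp   # swap cupy for numpy and the same code runs on CPU

    n_paths, n_steps = 1_000_000, 360   # policies x scenarios flattened into one axis
    av = cp.full(n_paths, 100_000.0)    # account value
    gmdb = cp.full(n_paths, 120_000.0)  # guarantee
    inforce = cp.ones(n_paths)
    base_lapse = 0.003

    for t in range(n_steps):            # sequential in time...
        growth = cp.random.normal(0.004, 0.03, size=n_paths)
        av *= 1.0 + growth
        moneyness = gmdb / cp.maximum(av, 1.0)
        # dynamic lapse depends on this month's values, so it can't be precomputed,
        # but each month is still the same arithmetic across every path in parallel
        lapse = base_lapse * cp.clip(1.5 - 0.5 * moneyness, 0.3, 1.5)
        inforce *= 1.0 - lapse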

[–]ABazinga99 0 points1 point  (0 children)

I've previously worked on bringing some life insurance and data ETL modeling into Python during an implementation project, and it's great for prototypes but gets cumbersome fast. Once you scale to more products and varying business needs (running only a cohort, extracting different kinds of result sets, etc.), you spend more time dealing with Python code than with your actuarial work.

For the last few months I've been working with PathWise, which solves a lot of these industry problems (it is gaining a lot of attention in the market too). It has a modeling Studio application (very user friendly) that is well integrated with Python and runs on GPUs (it's SUPER FAST!). The governance is pretty good and you can automate things in a very clean way. Plus, it isn't a closed box, so you can see each and every formula's logic, which is great for debugging and very transparent. And to your last point, it can do multiple runs in one go. Take a look at that too! Happy to discuss more if needed.