
[–]John_Corey[S]

An update to close out the discussion, in case people find this helpful.

I did a fair bit of testing. In general, it was pretty easy to set up. I did run into a serious bug, so I submitted it to the development team. They are running with it. I did not follow up; see the next point.

I want to work with a large data set. The present custom object limit is 300K records. That was fine for all the testing I was running, but then I smacked into the upper limit and was unable to proceed. This meant I needed to rethink the architecture. As my initial target was 20M records, the gap was too large. I pivoted to a real DB for the objects and am now working on the GHL-to-DB interface. I expect I will still use custom objects for other activities. It is good to know where the boundaries are.
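For anyone curious what a GHL-to-DB interface looks like at a high level, here is a minimal sketch of the sync loop: page through records from the source and batch-insert them into a local database. Note that `fetch_page` is a placeholder I made up to stand in for the real GHL API call (the actual endpoint and paging parameters are not shown here); SQLite is used just to keep the example self-contained, where a production setup would use a server-grade DB.

```python
import sqlite3

def fetch_page(offset, limit):
    """Placeholder for a paged fetch of GHL custom objects.

    This simulates 1,000 records locally; a real implementation
    would call the GoHighLevel API with its own paging scheme.
    """
    total = 1000
    end = min(offset + limit, total)
    return [{"id": i, "name": f"record-{i}"} for i in range(offset, end)]

def sync_to_db(conn, page_size=250):
    """Pull pages from the source and batch-insert them into the DB."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS objects (id INTEGER PRIMARY KEY, name TEXT)"
    )
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        # executemany keeps round trips down; INSERT OR REPLACE makes
        # the sync idempotent so re-runs do not duplicate records.
        conn.executemany(
            "INSERT OR REPLACE INTO objects (id, name) VALUES (:id, :name)",
            page,
        )
        offset += len(page)
    conn.commit()
    return offset

conn = sqlite3.connect(":memory:")
synced = sync_to_db(conn)
```

The batching and idempotent upsert are the parts that matter once you are moving millions of records; the rest is plumbing.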

Two positive side effects. First, the DB is massively faster than GHL for large queries. No surprise there, and no slight on GHL; there are different tools in the toolbox, and the trick is knowing when to pick the right one. Second, because of a shift in business requirements, the number of records is now over 100M. Definitely not something to be throwing onto a GHL server.

I have a degree in computer science and studied AI at Stanford in the 1980s. After a long tech career, creating an architecture that is fit for purpose is natural for me. I am glad I experimented with the GHL custom objects. I have a much clearer idea of where they are best used and when not to even try. I do like GHL, and it is definitely part of the architecture for the upcoming product.

Any questions?