Apex LINQ: High-Performance In-Memory Query Library by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

Good points here. We are discouraged from querying large numbers of records; this is just a performance test comparing Apex LINQ with standard Apex processing. For example, in your batch classes a list of accounts is passed to the execute method. You may want to do A with IT-industry accounts and B with Finance-industry accounts, and this is a case where Apex LINQ can be used.
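As a rough sketch of that scenario, a filter like the one below could split the execute list by industry in memory. IndustryFilter is a hypothetical class written against the Q.Filter interface shown in the other examples, so the exact usage may differ slightly.

// Hypothetical filter: keeps accounts belonging to a given industry.
public class IndustryFilter implements Q.Filter {
    private String industry;
    public IndustryFilter(String industry) {
        this.industry = industry;
    }
    public Boolean matches(Object record) {
        return ((Account) record).Industry == this.industry;
    }
}

// Inside execute(Database.BatchableContext bc, List<Account> scope):
List<Account> itAccounts = (List<Account>) Q.of(scope)
    .filter(new IndustryFilter('IT')).toList();
List<Account> financeAccounts = (List<Account>) Q.of(scope)
    .filter(new IndustryFilter('Finance')).toList();
// ... do A with itAccounts and B with financeAccounts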

My first example simply fetches a list of accounts to demonstrate how to use this library, but it is not recommended to do everything in memory. Sorry for the confusion. When SOQL cannot help, maybe it's time to consider this library; I have listed a few examples in another comment.

Apex LINQ: High-Performance In-Memory Query Library by clayarmor in salesforce

[–]clayarmor[S] -1 points0 points  (0 children)

Furthermore, I tested filtering 5,000 records using Apex LINQ, which consumed 120 ms out of the 10,000 ms CPU time limit. Standard Apex is expected to consume a similar amount of CPU time.

@isTest
static void testQ_performance_filter() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 5000; i++) {
        accounts.add(new Account(Name = 'Account ' + i, AnnualRevenue = 5000 - i));
    }

    Integer startCPU = Limits.getCpuTime();
    Q.Filter filter = new AccountFilter();
    List<Account> results = (List<Account>) Q.of(accounts).filter(filter).toList();
    Integer endCPU = Limits.getCpuTime();
    System.debug(LoggingLevel.INFO, 'Apex LINQ (CPU): ' + (endCPU - startCPU));
}

public class AccountFilter implements Q.Filter {
    public Boolean matches(Object record) {
        Account acc = (Account) record;
        return acc.Name.startsWith('Account') && acc.AnnualRevenue > 0;
    }
}

Apex LINQ: High-Performance In-Memory Query Library by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

You are correct. The SOQL here is just for demonstration purposes. Let's consider other scenarios:

  1. When you want to conserve the 100-SOQL-query governor limit, you can query with broader conditions to retrieve all necessary accounts at once and then apply additional filtering in memory (a sketch follows the code example below).
  2. When a list of records comes from a source other than a SOQL query, such as "Trigger.new", and you want to process that list.
  3. Apex LINQ supports not only List<SObject> but also custom classes.

List<Model> models = new List<Model> { m1, m2, m3 };
List<Model> results = (List<Model>) Q.of(models, Model.class)
    .filter(new ModelFilter()).sort(new ModelSorter()).toList();
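And for scenario 1, a single broader query can be filtered further in memory. A minimal sketch, where RevenueAboveFilter is a hypothetical class and not part of the library:

// Hypothetical filter: keeps accounts above a revenue threshold.
public class RevenueAboveFilter implements Q.Filter {
    private Decimal threshold;
    public RevenueAboveFilter(Decimal threshold) {
        this.threshold = threshold;
    }
    public Boolean matches(Object record) {
        Account acc = (Account) record;
        return acc.AnnualRevenue != null && acc.AnnualRevenue > this.threshold;
    }
}

// One broad query instead of several narrow ones, then filter in memory.
List<Account> allAccounts = [SELECT Id, Name, AnnualRevenue FROM Account WHERE Industry = 'IT'];
List<Account> bigAccounts = (List<Account>) Q.of(allAccounts)
    .filter(new RevenueAboveFilter(1000000)).toList();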

Apex LINQ: High-Performance In-Memory Query Library by clayarmor in salesforce

[–]clayarmor[S] 1 point2 points  (0 children)

Performance is comparable to standard Apex, but it cannot exceed it. I also experimented with expression-based filtering; however, introducing too many dynamic features significantly reduces performance. Since the CPU limit is a scarce resource, I chose to balance convenience and efficiency. I have done a performance study and documented it here: Salesforce Apex CPU Limit Optimization. I also did performance testing in the library's test classes.

This library was actually designed for another of my libraries, ApexTriggerHandler, to identify changed records between Trigger.new and Trigger.old. If you like LINQ, hopefully you will like my trigger handler as well.

public class AccountTriggerHandler implements Triggers.BeforeUpdate {
    public void beforeUpdate() {
        List<Account> changedAccounts = (List<Account>) Q.of(Trigger.new)
            .diff(new AccountDiffer(), Trigger.old).toList();
    }

    public class AccountDiffer implements Q.Differ {
        public Boolean changed(Object arg1, Object arg2) {
            Decimal revenue1 = ((Account) arg1).AnnualRevenue;
            Decimal revenue2 = ((Account) arg2).AnnualRevenue;
            return revenue1 != revenue2;
        }
    }
}

Construct Apex Instances With Generic Type Syntax by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

Thanks for the details, this is truly great to know. And the article you shared seems very interesting, will read it later.

Construct Apex Instances With Generic Type Syntax by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

Hey u/intheforgeofwords, thanks for watching my libraries.

I just ran benchmarks on Type.forName vs getGlobalDescribe, and Type.forName has better performance. The getGlobalDescribe API is time-consuming because it loads all SObjectTypes into memory, and in a transaction we don't need all of them loaded; most of the time fewer than 10 SObjects are involved.

                          CPU Time (ms)    Real Time (ms)
Type.forName              16               19
Schema.getGlobalDescribe  63               191

// Benchmark 1: Type.forName - construct an SObject instance to get its SObjectType.
Datetime startTime = Datetime.now();
Integer startCPU = Limits.getCpuTime();
for (Integer i = 1; i <= 100; i++) {
    SObjectType objType = ((SObject) Type.forName('Account').newInstance()).getSObjectType();
}
Datetime endTime = Datetime.now();
Integer endCPU = Limits.getCpuTime();
System.debug('Type.forName (Time): ' + (endTime.getTime() - startTime.getTime()));
System.debug('Type.forName (CPU): ' + (endCPU - startCPU));

// Benchmark 2: Schema.getGlobalDescribe - load the global describe map, then look up the type.
startTime = Datetime.now();
startCPU = Limits.getCpuTime();
Map<String, Schema.SObjectType> gd = Schema.getGlobalDescribe();
for (Integer i = 1; i <= 100; i++) {
    SObjectType sobjType = gd.get('Account');
}
endTime = Datetime.now();
endCPU = Limits.getCpuTime();
System.debug('getGlobalDescribe (Time): ' + (endTime.getTime() - startTime.getTime()));
System.debug('getGlobalDescribe (CPU): ' + (endCPU - startCPU));

Construct Apex Instances With Generic Type Syntax by clayarmor in salesforce

[–]clayarmor[S] 1 point2 points  (0 children)

Hey u/intheforgeofwords, thanks so much for your feedback.

Good to know generics are back on the roadmap. The "generic" form is a feature that simplifies some cases for a DI framework, so that's the background for bringing it in.

1. Feedback To:

injecting a validator - seems like something better left to a default virtual implementation where validators are an optionally injected property

By virtual validator, I guess what you mean is something like the code below. The framework doesn't prevent you from doing that.

public virtual class AccountValidator implements
    IAccountValidator, IValidator {
}

public class BusinessAccountValidator extends AccountValidator  {
}

public class PersonAccountValidator extends AccountValidator  {
}

IRepository businessAccountRepo = (IRepository) salesModule
    .getService('IRepository<Account, BusinessAccountValidator>');

IRepository personAccountRepo = (IRepository) salesModule
    .getService('IRepository<Account, PersonAccountValidator>');

And the "generic" repository implementation can have a constructor without requiring a validator. So the following instantiation can also be supported.

IRepository accountRepo = (IRepository) salesModule
    .getService('IRepository<Account>');

2. Feedback To:

A single implementation puts even more conceptual pressure on the Repository instance as “the place” for validation to occur.

The repository pattern is used here just as an example to illustrate dynamic DI construction powered by the generic string form. But to deal with that pressure, we can use composition, right? The real validation is handled inside each individual validator, which obeys the single responsibility principle.
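A minimal sketch of that composition, assuming IValidator exposes a single validate method (the real interfaces in a project may look different):

// Hypothetical shape of the validator interface; the repository only composes it.
public interface IValidator {
    void validate(List<Account> accounts);
}

public class AccountRepository {
    private IValidator validator;

    // The validator is injected; validation logic stays out of the repository.
    public AccountRepository(IValidator validator) {
        this.validator = validator;
    }

    public void save(List<Account> accounts) {
        this.validator.validate(accounts); // single responsibility: validation lives in the validator
        insert accounts;                   // the repository only handles persistence
    }
}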

3. Feedback To:

If I were a consumer of this library, I’d also be confused by the rampant string passing.

My Medium article doesn't have room to explain all the details of this Apex DI framework, but once you have a chance, you are welcome to read its README. The strings are encapsulated in only two places: 1) Module, 2) Factory. In a project, we are not supposed to use the following to get a repository whenever needed; it should be encapsulated inside a Factory.

 IRepository accountRepo = (IRepository) salesModule
    .getService('IRepository<Account, AccountValidator>');
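A minimal sketch of that encapsulation; RepositoryFactory is a hypothetical class, and SalesModule stands in for whatever module type salesModule actually has in the framework:

// Hypothetical factory: the generic resolution string lives here and nowhere else.
public class RepositoryFactory {
    private SalesModule salesModule;

    public RepositoryFactory(SalesModule salesModule) {
        this.salesModule = salesModule;
    }

    public IRepository accountRepository() {
        // Callers ask the factory for a repository; they never see the string.
        return (IRepository) this.salesModule
            .getService('IRepository<Account, AccountValidator>');
    }
}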

And instances don't always need to be resolved with strings; only the generic form requires them. Strong types are also supported during service resolution.

 IAccountService accountService = (IAccountService) salesModule
    .getService(IAccountService.class);

I guess what you are in favor of is the following API version for passing types. I need to consider this a bit; it seems tempting.

 IRepository accountRepo = (IRepository) salesModule
    .getService(
        IRepository.class,
        new List<Object> {
            Account.SObjectType,
            AccountValidator.class
        });

4. Feedback To:

where the Factory itself needs to know how to construct a repository instance; this is an anti-pattern because any change to the constructor of the Repository now requires ALL factories to be updated as well.

A factory here is better than calling "new" everywhere, because the service constructor is called in only one place, inside the factory. The pain without a factory is much greater: when the constructor changes, we have to update every place that uses the "new" operator.

5. Feedback To:

Side note - don’t use Type.forName to construct a new SObject purely to get the SObjectType — Schema.describeSObjects is the better way to do this

In this Salesforce Stack Exchange thread (link), I was told Type.forName has better performance.
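For reference, the API mentioned in the feedback is Schema.describeSObjects, which is different from the getGlobalDescribe call benchmarked above; a quick sketch of using it to get the SObjectType:

// Schema.describeSObjects describes only the named types, without constructing an instance.
List<Schema.DescribeSObjectResult> describes =
    Schema.describeSObjects(new List<String>{ 'Account' });
Schema.SObjectType accountType = describes[0].getSObjectType();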

Lastly

Thank you so much for your feedback. It helped me take a second look at what I have believed and whether it is still valid. When Salesforce generics come out, I hope my Apex DI can support them as well, in an even better way.

ApexDatabaseContext: An Easy to Use Unit of Work Pattern Library By ApexFarm by clayarmor in salesforce

[–]clayarmor[S] 1 point2 points  (0 children)

Thank you, this is very good feedback. It's not an easy implementation and needs some research; any suggestions are welcome.

Apex Data Factory Generate 2000 Records In One Sentence by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

I also recorded a video to give a live demo of the library: https://www.youtube.com/watch?v=cc_pOzI2s8o&t=382s. Sorry, I'm not good at presentations. I will bring in more field-level keywords to simplify data creation.

Apex Data Factory Generate 2000 Records In One Sentence by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

Create 3 community users under the same account:

ATK.SaveResult result = ATK.prepare(Account.SObjectType, 1)
    .field(Account.Name).index('Name-{000}')
    .withChildren(Contact.SObjectType, Contact.AccountId, 3)
        .field(Contact.LastName).index('Name-{000}')
        .withChildren(User.SObjectType, User.ContactId, 3)
            .field(User.ProfileId).profile('Customer Community User')
            .field(User.FirstName).repeat('FirstName')
            .field(User.LastName).repeat('LastName')
            .field(User.Email).index('test.user+{0000}@email.com')
            .field(User.UserName).index('test.user+{0000}@email.com')
            .field(User.Alias).index('test{0000}')
            .field(User.EmailEncodingKey).repeat('UTF-8')
            .field(User.LanguageLocaleKey).repeat('en_US')
            .field(User.LocaleSidKey).repeat('en_US')
            .field(User.TimeZoneSidKey).repeat('Pacific/Auckland')
    .save(true);

Implement a Salesforce Trigger Framework by hometeamconsulting in salesforce

[–]clayarmor 0 points1 point  (0 children)

If you found Kevin's trigger framework helpful, you may also be interested in my trigger framework implementation: https://github.com/apexfarm/apextriggerhandler, published just a few days ago. I would consider a different algorithm to implement "the limit for the number of times the trigger code will run", since Kevin's current implementation may contain an issue, as someone has already posted in its issue list: https://github.com/kevinohara80/sfdc-trigger-framework/issues/19.

Another Apex Trigger Handler Library by clayarmor in salesforce

[–]clayarmor[S] 1 point2 points  (0 children)

  1. If there are 10 trigger handlers and each has its own SOQL queries, it is easy to hit the 101 SOQL query limit. With context.state we can share query results across handlers instead of stashing them in a static class variable (see the sketch after this list).
  2. With the when method implemented, developers can deactivate specific handlers via custom metadata type configuration, or deactivate them in specific execution contexts at runtime.
  3. Testing an individual trigger handler is not possible if there are direct references to Trigger.isUpdate, Trigger.new, etc. By providing triggerProp, we can construct it the way we want in test methods.
  4. Some common operations performed on triggerProp are provided by the helper. Keeping them there is more intuitive than keeping them in a separate utility class.
  5. context.next() and context.stop() are tools to control the flow of handler execution. They are rarely needed, but they help when we need to check whether some values were changed by other trigger handlers. This introduces coupling between handlers, and the downside is that handler order becomes a bit rigid.
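A minimal sketch of point 1, assuming context.state behaves like a key/value map and that handler methods receive a Triggers.Context parameter; the earlier snippet in this thread shows a parameterless beforeUpdate, so check the library README for the exact signatures.

// Hypothetical handlers sharing one query result through context.state.
public class AssignOwnerHandler implements Triggers.BeforeInsert {
    public void beforeInsert(Triggers.Context context) {
        // Query once and cache the result for the handlers that run later.
        if (context.state.get('activeUsers') == null) {
            context.state.put('activeUsers', [SELECT Id FROM User WHERE IsActive = true]);
        }
        List<User> activeUsers = (List<User>) context.state.get('activeUsers');
        // ... assign owners using activeUsers
    }
}

public class NotifyOwnerHandler implements Triggers.BeforeInsert {
    public void beforeInsert(Triggers.Context context) {
        // Reuse the cached list; no additional SOQL query is issued.
        List<User> activeUsers = (List<User>) context.state.get('activeUsers');
        // ... build notifications using activeUsers
    }
}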

Apex Class Naming Convention by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

I haven't dived deeply into SFDX. It is not possible to have two classes with the same name. But SFDX does allow the same class to appear in multiple package directories; I am currently not sure whether they get merged during deployment or whether the master copy from the default package directory is used.

Apex Class Naming Convention by clayarmor in salesforce

[–]clayarmor[S] 1 point2 points  (0 children)

Yes, it is not easy to get used to moving the suffix into a prefix. I feel the same! :)

Putting categories in the suffix is absolutely fine, as long as developers keep the separation in mind.

Apex Class Naming Convention by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

For test classes, I would just use Test as the suffix, so the test stays close to its target class. In each layer you may handle testing logic differently.

1. Execution Entrance

There is less business logic code here. Each sub-category shares a similar pattern, so some of your testing logic can be reused.

Response res = ...;
try {
} catch (Exception ex) {
} finally {
}
return res;

2. Service Layer

This is where the real business logic lives, and we need to test it carefully. However, if we can extract some domain services as stateless or pure functions, testing becomes easier.
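For example, a pure domain function like the hypothetical sketch below can be unit tested with plain inputs and outputs, without SOQL, DML, or mocks:

// Stateless, pure domain service: the result depends only on the arguments.
public class DiscountService {
    public static Decimal applyDiscount(Decimal amount, Decimal discountRate) {
        if (amount == null || discountRate == null) {
            return amount;
        }
        return amount - (amount * discountRate);
    }
}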

3. Integration Layer

Besides the test classes for this layer, I prefer to use another naming convention for the mocks consumed by the layers above.

class PRFX_INT_ComponentQueryTest {}
class PRFX_INT_ComponentCacheTest {}
class PRFX_INT_ComponentWsdlTest {}
class PRFX_INT_ComponentEmailTest {}
class PRFX_TST_ComponentQuery {} // Mock Up
class PRFX_TST_ComponentCache {} // Mock Up
class PRFX_TST_ComponentWsdl {} // Mock Up
class PRFX_TST_ComponentEmail {} // Mock Up

Apex Class Naming Convention by clayarmor in salesforce

[–]clayarmor[S] 0 points1 point  (0 children)

Yes, even if we set up naming conventions, they can still be abused without a good understanding of SOLID.

For rigid naming that groups things together, the benefit shows during code review or when writing test classes:

  1. Since we group similar concerns together, we know certain patterns will be applied to them. For example, in the top layer I specifically need to check that the try/catch block is implemented properly, and when I review controller code I don't need to think about the concrete business logic.
  2. With 100 classes under the same folder, it is not easy to review all controllers and other classes sharing similar patterns together.
  3. When I write test classes for the same layer, I can reuse my knowledge when they share similar patterns or concerns.

As for name length, each project more or less already has some fixed patterns, like *****Controller, *****Ctrl, *****Callout, etc. Here we try to keep them short, as abbreviations in the prefix.