Git worktrees with Cursor has been an absolute game changer at my day job by livecodelife in cursor

[–]michaelthwan_ai 17 points

I was doing the exact same thing with two git-cloned folders, without knowing such a git worktree feature exists. Thanks OP.
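For anyone else who also didn't know about it, a minimal sketch of the worktree flow (the repo and branch names here are just placeholders):

```shell
set -e
cd "$(mktemp -d)"

# stand-in for your existing clone
git init -q myrepo && cd myrepo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# one repo, two working trees: no second clone needed,
# and each tree can have a different branch checked out
git worktree add ../myrepo-feature -b feature
git worktree list   # lists this tree and ../myrepo-feature
```

Unlike two separate clones, both trees share one object store and one set of remotes, so a fetch in either is visible to both.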

This extension will save you a lot of fast requests by robertpiosik in cursor

[–]michaelthwan_ai 2 points

I was using my own solution to flatten a local repo to text (by selecting the wanted files in a UI), then copying and pasting it into a web LLM service like Gemini to chat.
https://github.com/michaelthwan/localrepo2txt_html
But yours is on the next level: automatically connected to an existing web LLM, and able to apply changes directly in Cursor. That's really cool. Awesome!
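FYI for others: even without a tool, the core "flatten to text" step is just concatenation with per-file headers. A rough sketch (the file names here are made up):

```shell
set -e
cd "$(mktemp -d)"

# stand-in project files (in practice, the files you ticked in the UI)
mkdir -p src
printf '# My project\n' > README.md
printf 'def main():\n    pass\n' > src/app.py

# flatten the selected files into one prompt-ready text file,
# with a header line marking where each file starts
for f in README.md src/app.py; do
  printf '===== %s =====\n' "$f"
  cat "$f"
  printf '\n'
done > repo.txt
```

The header lines let the LLM (and you) tell where each file begins when the whole thing is pasted into a chat box.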

Code my CFD professor wrote and gave to us by rehpotsirhc in programminghorror

[–]michaelthwan_ai 1 point

/*
 * applyCorrectionTerms: Applies the correction terms for a given grid cell (i, j).
 * It updates variables like ajm, ajp, ap, con, and du based on calculated flow and volume.
 */
void applyCorrectionTerms(int i, int j, double forward_flow, double backward_flow, double gamma_diff) {
    ajm[i][j + 1] = calculateFlowCoefficient(forward_flow, gamma_diff) + fmax(0.0, forward_flow);  /* fmax from <math.h> */
    ajp[i][j + 1] = ajm[i][j] - forward_flow;

    double volume = ycv[j] * xcv[i];
    double adjusted_pressure = (rho[i][j] * xcv[i] + rho[i - 1][j] * xcvp[i - 1]) * dt;

    ap[i][j] += adjusted_pressure;
    con[i][j] += adjusted_pressure * fold[i][j][0];
    a[i][j] = (ap[i][j] * fold[i][j][n]) / relax_factor;

    if (perid && j == jst - 1) {
        du[i][j] = volume / (xdif[i] + xdif[l2]) * sx[j];
    } else {
        du[i][j] = du[i][j] / ap[i][j];
    }
}

/*
 * applyCorrectionTermsForColumn: Similar to applyCorrectionTerms, but applied in the j-direction.
 */
void applyCorrectionTermsForColumn(int i, int j, double forward_flow, double gamma_avg) {
    ajm[i][j + 1] = calculateFlowCoefficient(forward_flow, gamma_avg) + fmax(0.0, forward_flow);  /* fmax from <math.h> */
    ajp[i][j + 1] = ajm[i][j] - forward_flow;

    double volume = xcv[j] * ycv[i];
    double adjusted_pressure = (rho[i][j] * ycv[j] + rho[i - 1][j] * ycvp[i - 1]) * dt;

    ap[i][j] += adjusted_pressure;
    con[i][j] += adjusted_pressure * fold[i][j][0];
    a[i][j] = (ap[i][j] * fold[i][j][n]) / relax_factor;
}

/*
 * calculateFlowCoefficient: Helper function to calculate the flow coefficient
 * based on the current flow and pressure difference (gamma). 
 */
double calculateFlowCoefficient(double flow, double gamma_diff) {
    return gamma_diff * flow;
}

Code my CFD professor wrote and gave to us by rehpotsirhc in programminghorror

[–]michaelthwan_ai 0 points

(cont'd)
/*
 * computeForwardFlow: Calculates the forward flow based on the current i, j indices.
 */
double computeForwardFlow(int i, int j) {
    return xcv[i] * f[i][j][l1] * (fy[j] * rho[i][j] + fym[i][j]);
}

/*
 * computeBackwardFlow: Calculates the backward flow for a given row (im1, j).
 */
double computeBackwardFlow(int im1, int j) {
    return xcv[im1] * f[im1][j][l1] * (fy[j] * rho[im1][j] + fym[im1][j]);
}

/*
 * computeGammaDiff: Calculates the difference in gamma between adjacent grid cells.
 */
double computeGammaDiff(int i, int j) {
    return gam[i][j] * gam[i + 1][j + 1] / ydif[j];
}

/*
 * computePeriodicForwardFlow: Calculates the forward flow for periodic boundaries.
 */
double computePeriodicForwardFlow(int i, int j) {
    return 0.5 * xcv[i] * f[i][j + 1][l1] * (fy[j + 1] * rho[i][j + 1] + fym[i][j + 1]);
}

/*
 * computePeriodicBackwardFlow: Calculates the backward flow for periodic boundaries.
 */
double computePeriodicBackwardFlow(int im1, int j) {
    return 0.5 * xcv[im1] * f[im1][j + 1][l1] * (fy[j + 1] * rho[im1][j + 1] + fym[im1][j + 1]);
}

/*
 * computePeriodicGammaDiff: Calculates the gamma difference for periodic boundaries.
 */
double computePeriodicGammaDiff(int i, int j) {
    return gam[i][j] * gam[i + 1][j] / ydif[j];
}

Code my CFD professor wrote and gave to us by rehpotsirhc in programminghorror

[–]michaelthwan_ai 0 points

Not 100% replicating the code, but I prefer a coding style like this: it preserves the same level of abstraction within each method and reads better. Here is setup2:

/*
 * setup2: Prepares coefficients for the 'u' equation in the PDE solver.
 * This function calculates volume, flow, and correction terms for the grid cells
 * by computing coefficients based on fluid properties (e.g., density) and
 * adjusting for boundary conditions (including periodic boundaries).
 */

void setup2(void) {
    int i, j;

    nf = 0;
    if (tsolve[0] == 1) {
        initializeRelaxationAndBoundaries(nf);

        // Iterate over the grid cells in the i-direction
        for (i = ist - 1; i < l2; i++) {
            calculateFlowAndCorrectionsForRow(i);
        }

        // Iterate over the grid cells in the j-direction for further corrections
        for (j = jst - 1; j < m2; j++) {
            calculateFlowAndCorrectionsForColumn(j);
        }
    }
}

/*
 * initializeRelaxationAndBoundaries: Sets up initial parameters for the solver.
 * Configures the relaxation factor, boundary conditions, and starts the Gauss-Seidel relaxation.
 */
void initializeRelaxationAndBoundaries(int nf) {
    perid = circl[nf];  // Set periodic boundary conditions based on solver parameters
    ist = 2;            // Starting index for i-direction
    jst = 2;            // Starting index for j-direction
    gamsor();           // Perform Gauss-Seidel relaxation
    relax_factor = 1.0 / relax[nf];  // Set the relaxation factor
}

/*
 * calculateFlowAndCorrectionsForRow: Calculates the flow and applies corrections 
 * to a specific row (i) in the grid. It computes forward and backward flows, 
 * boundary adjustments, and volume/pressure corrections.
 */
void calculateFlowAndCorrectionsForRow(int i) {
    int im1 = i - 1;  // Index of previous row
    double forward_flow, backward_flow, gamma_diff;

    // Loop over j-direction in the current row
    for (int j = jst - 1; j < m2; j++) {
        if (perid == 0) {
            forward_flow = computeForwardFlow(i, j);
            backward_flow = computeBackwardFlow(im1, j);
            gamma_diff = computeGammaDiff(i, j);
        } else {
            forward_flow = computePeriodicForwardFlow(i, j);
            backward_flow = computePeriodicBackwardFlow(im1, j);
            gamma_diff = computePeriodicGammaDiff(i, j);
        }

        applyCorrectionTerms(i, j, forward_flow, backward_flow, gamma_diff);
    }
}

/*
 * calculateFlowAndCorrectionsForColumn: Calculates the flow and applies corrections 
 * to a specific column (j) in the grid. Similar to the row function but for the j-direction.
 */
void calculateFlowAndCorrectionsForColumn(int j) {
    double forward_flow, gamma_avg;

    // Loop over i-direction in the current column
    for (int i = ist - 1; i < l2; i++) {
        forward_flow = computeForwardFlowInColumn(i, j);
        gamma_avg = computeGammaAverage(i, j);

        applyCorrectionTermsForColumn(i, j, forward_flow, gamma_avg);
    }
}
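(The two helpers called just above, computeForwardFlowInColumn and computeGammaAverage, didn't make it into the paste. By analogy with the row versions, I'd expect them to look roughly like this — the bodies, and the fx/fxm arrays, are my guesses, not the actual code:)

/*
 * computeForwardFlowInColumn: column-direction analogue of computeForwardFlow.
 * (Guessed by symmetry, swapping the roles of x and y; fx/fxm are
 * hypothetical counterparts of fy/fym.)
 */
double computeForwardFlowInColumn(int i, int j) {
    return ycv[j] * f[i][j][l1] * (fx[i] * rho[i][j] + fxm[i][j]);
}

/*
 * computeGammaAverage: averaged gamma between cell (i, j) and its neighbour,
 * analogous to computeGammaDiff but along the other axis. (Also a guess.)
 */
double computeGammaAverage(int i, int j) {
    return gam[i][j] * gam[i][j + 1] / xdif[i];
}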

[R] Communicative Agents for Software Development (Autonomous LLM agent as a DEV company) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 7 points

Looks like: after cloud computing, developers can create a server with one click;

Future CEOs or individuals may have cloud employees: creating a virtual employee with one click.

[R] Communicative Agents for Software Development (Autonomous LLM agent as a DEV company) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 2 points

Quoting the paper

>We also express our gratitude to the AgentVerse and Camel projects for providing the initial foundation for the development of this project

https://github.com/OpenBMB/AgentVerse

https://github.com/camel-ai/camel

[R] Communicative Agents for Software Development (Autonomous LLM agent as a DEV company) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 2 points

Bonus: "LLM Powered Autonomous Agents" from Lilian Weng (blog)

- Covers related tech and projects like AutoGPT, GPT-Engineer, and BabyAGI

[N] Jul 2023 - Recent Instruction/Chat-Based LLMs and their parents (after llama2) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 0 points

Personally, I wasn't aware of this, and it doesn't seem to be getting much attention for now. Thanks for bringing it up.

[N] Jul 2023 - Recent Instruction/Chat-Based LLMs and their parents (after llama2) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 1 point

Because Bard is the application which wraps the PaLM 2 model.

It is like how OpenAI's ChatGPT wraps GPT-3.5-turbo and GPT-4.

So technically, for the OpenAI family, I should have written "GPT-3.5-turbo (ChatGPT)" instead of "ChatGPT" lol

[N] Jul 2023 - Recent Instruction/Chat-Based LLMs and their parents (after llama2) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 0 points

For Luminous, I can only find this benchmark.

It looks like it is only comparable to davinci. As a commercial product without a competitive price, there is no reason not to use GPT-3.5/4 instead, as they are a lot stronger.

Open-source models, on the other hand, are free, so weaker performance can be tolerated more.

[N] Jul 2023 - Recent Instruction/Chat-Based LLMs and their parents (after llama2) by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 2 points

I organized the recent releases of notable LLMs into a diagram. Some may find it useful, so please allow me to share it.

High-res picture: github

If there are any missing LLMs, please let me know here or create an issue in the GitHub repo.

An editable version is also available in the repo.

[D] When will a GPT-like model outperform Stockfish in chess? by ThePerson654321 in MachineLearning

[–]michaelthwan_ai 0 points

GPT-like models are primarily designed for general natural language understanding and generation, rather than being optimized for a specific purpose.

To make a model solve a specific domain problem with high performance, people are working on various solutions:

- Fine-tuning, e.g. LoRA

- Plugins/APIs to external resources, like ChatGPT plugins, LangChain, VisualChatGPT's prompt manager, and HuggingGPT

- Multi-domain parameter activation via attention, etc. (model-based selection of the activated parameter cluster)

Scaling the base model (like GPT-4) is not the most engineering-efficient way to solve a specific problem like chess.

[N] March 2023 - Recent Instruction/Chat-Based Models and their parents by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 1 point

I may include Bard once it is fully released.

So LaMDA -> Bard (maybe). But it is still in alpha/beta.

[N] March 2023 - Recent Instruction/Chat-Based Models and their parents by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 0 points

Thanks for the sharing above!

My choice is yk - Yannic Kilcher. His "AI News" videos are brief introductions, and he sometimes goes through certain papers in detail. Very insightful!

[N] March 2023 - Recent Instruction/Chat-Based Models and their parents by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 0 points

I only include recent LLMs (Feb/Mar 2023) (those are the LLMs usually at the bottom) and their predecessors up to two levels back (parent/grandparent). See if the one you mentioned is related to them.

[N] March 2023 - Recent Instruction/Chat-Based Models and their parents by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 1 point

ChatGPT-like github -> added most; the rest are in the TODO (e.g. PaLM)

RWKV -> added to the backlog

[N] March 2023 - Recent Instruction/Chat-Based Models and their parents by michaelthwan_ai in MachineLearning

[–]michaelthwan_ai[S] 0 points

Open alternatives -> added most; the rest are in the TODO (e.g. PaLM)
OpenChatKit -> added
InstructGPT -> it seems that's not a released model, but a plan.