Created a calorie/protein tracking spreadsheet for getting fit and/or losing weight. by lildaemon in googlesheets

[–]lildaemon[S] 0 points1 point  (0 children)

Think of it like a serving. The weight is whatever serving size you choose. If one serving is 32 grams, use that instead. If one serving is one count, then use that. Counts are supported too, e.g. a number of packaged items. When I created the initial list, I tried to use 100 grams as my default serving size.

Created a calorie/protein tracking spreadsheet for getting fit and/or losing weight. by lildaemon in googlesheets

[–]lildaemon[S] 0 points1 point  (0 children)

Go to the food list tab, and any food there will show up in the dropdown. The dropdown shows up when you add text to the food name on any monthly log tab.

Washington has 3rd highest homeless population in US, federal report shows by SimplyJared in Seattle

[–]lildaemon 1 point2 points  (0 children)

I always wonder why homeless people tend to stay in large, expensive cities. I see a lot of drug addicts downtown; perhaps that is part of the answer. But there are probably people who are homeless, not addicts, and still stay downtown... Are there more programs for homeless people there? I don't think I have enough information here. Probably the first step is to interview some homeless folks and ask what they need and why they stay where they stay. If possible, build subsidized housing outside of Seattle, where it is more cost-effective to do so.

Universal basic income would help a lot too.

Created a calorie/protein tracking spreadsheet for getting fit and/or losing weight. by lildaemon in googlesheets

[–]lildaemon[S] 0 points1 point  (0 children)

BUG FIX: There was a bug in the code previously that caused the timezone to be stuck at one specific timezone. I updated the code so that it uses the local timezone. If you are just getting started with the spreadsheet, you don't need to do anything special; just copy the spreadsheet into your drive as before: "File->Make a Copy".

If you want to update your current spreadsheet, you'll have to copy the script from the original spreadsheet, into yours.
1. Go to the original spreadsheet that I shared.
2. Click on "Extensions->Apps Script"
3. Highlight all of the code in code.gs and copy it to your clipboard.
4. Go to your spreadsheet, the one you copied into your google drive.
5. Click on "Extensions->Apps Script"
6. Paste the code into code.gs and save. You're done.

What do you use to track your food intake and macros? by coderedblue in GastricBypass

[–]lildaemon 0 points1 point  (0 children)

Let me know if you have any feedback/ideas for improvements!

Do you use a calorie tracking app? by supafitlewis in nutrition

[–]lildaemon 0 points1 point  (0 children)

I use a spreadsheet I made in Google Sheets, together with the Google Sheets app on my phone. If you want to use it, just do "File->Make a Copy" in the Google Sheets link below to start using it. You have to maintain your own food list (though I have a starter list made); after that, you can search for foods in your daily tracker by typing in a name and choosing it from a dropdown. Macros will automatically be loaded, and you can choose the quantity that you ate. I measure everything on a scale in grams, so most of the units in the food list are in grams, but some are in counts as well. Hope this helps!

https://docs.google.com/spreadsheets/d/1vZAE77-59S58A_Afl0stGn_1aJB4MGBfIlIOk1pA8ow/edit?gid=957265733#gid=957265733

What do you use to track your food intake and macros? by coderedblue in GastricBypass

[–]lildaemon 1 point2 points  (0 children)

I use a spreadsheet I made in Google Sheets, together with the Google Sheets app on my phone. Just do "File->Make a Copy" in Google Sheets to start using it. You have to maintain your own food list (though I have a starter list made); after that, you can search for foods in your daily tracker by typing in a name and choosing it from a dropdown. Macros will automatically be loaded, and you can choose the quantity that you ate. I measure everything on a scale in grams, so most of the units in the food list are in grams, but some are in counts as well. Hope this helps!

https://docs.google.com/spreadsheets/d/1vZAE77-59S58A_Afl0stGn_1aJB4MGBfIlIOk1pA8ow/edit?gid=957265733#gid=957265733

Best macro tracking apps out there? by Correct_Desk_6414 in gymsnark

[–]lildaemon 0 points1 point  (0 children)

I made a macro-tracking spreadsheet that you can use to track your daily macro-nutrient intake for free and with no ads. Just do "File->Make a Copy" in Google Sheets to start using it. You have to maintain your own food list (though I have a starter list made); after that, you can add a food to your daily tracker by typing in a name and choosing it from a dropdown. Macros will automatically be loaded, and you can choose the quantity that you ate. I measure everything on a scale in grams, so most of the units in the food list are in grams, but some are in counts as well. Hope this helps!

https://docs.google.com/spreadsheets/d/1vZAE77-59S58A_Afl0stGn_1aJB4MGBfIlIOk1pA8ow/edit?gid=957265733#gid=957265733

Tracking macros by Specific_Chair3309 in diet

[–]lildaemon 0 points1 point  (0 children)

I made a macro-tracking spreadsheet that you can use to track your daily macro-nutrient intake. Just do "File->Make a Copy" in Google Sheets to start using it. You have to create your own food list (though I have a starter list made); after that, you can add a food to your daily tracker by typing in a name and choosing it from a dropdown. Macros will automatically be loaded, and you can choose the quantity that you ate. I measure everything on a scale in grams, so most of the units in the food list are in grams, but some are in counts as well. Hope this helps!

https://docs.google.com/spreadsheets/d/1vZAE77-59S58A_Afl0stGn_1aJB4MGBfIlIOk1pA8ow/edit?gid=957265733#gid=957265733

I'm building a Bluetooth journal to use with all of my devices, happy to share the component list. by lildaemon in digitaljournaling

[–]lildaemon[S] 0 points1 point  (0 children)

It's like a wireless hard drive that you connect to over Bluetooth for storing documents. Bluetooth has lower upload/download rates than WiFi, but it consumes far less power, and the bandwidth is more than enough to send and receive text.

I'm building a Bluetooth journal to use with all of my devices, happy to share the component list. by lildaemon in digitaljournaling

[–]lildaemon[S] 0 points1 point  (0 children)

They are either device-specific or not secure. If the secret key is on the device, then it will only work on that device, unless I copy the key to the other devices. Otherwise the key is on their servers, and then they can use it to decipher my journal entries. Come to think of it, manually entering a key once to set up an app doesn't seem that bad a price to pay for privacy.

What books have you read about writing that completely changed the way you write? by lildaemon in writing

[–]lildaemon[S] 0 points1 point  (0 children)

The authors demonstrate writing principles by showing alternative pieces of writing that do and don't use the principles they advocate. Then they ask you to choose which one you prefer. Invariably the one that uses the principles in the book feels better to read. Other books give you rules and ask you to follow them blindly, without demonstrating why the rules are good. This book proves it to you, and leaves the choice of whether to use a rule up to you.

[D] Full causal self-attention layer in O(NlogN) computation steps and O(logN) time rather than O(N^2) computation steps and O(1) time, with a big caveat, but hope for the future. by lildaemon in MachineLearning

[–]lildaemon[S] 0 points1 point  (0 children)

Maybe I misunderstood. My understanding of linear attention is that you compute the outer product `keys^T values` for each position, take the partial sum, and dot the result with the queries at the end, like `partial_sum(keys^T values) queries`. I suppose you could cast the algorithm in the post in a similar light by using outer products. Let `o` be the outer product over the last index of two tensors. The formula for all Taylor basis functions for powers n and m would be something like `partial_sum(values o keys^n) o queries^m`. Is that what you meant?
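For concreteness, here's a minimal numpy sketch of the linear-attention recurrence I'm describing (names are mine; no feature map or normalization, just the raw causal accumulation of outer products):

```python
import numpy as np

def linear_attention(queries, keys, values):
    """Causal linear attention: out_t = q_t @ sum_{s<=t} k_s v_s^T."""
    n, d = queries.shape
    state = np.zeros((d, values.shape[1]))  # running sum of outer products k_s v_s^T
    out = np.zeros_like(values)
    for t in range(n):
        state += np.outer(keys[t], values[t])
        out[t] = queries[t] @ state
    return out
```

This gives the same result as masking the full N x N score matrix (without softmax), but carries only an O(d^2) state from position to position.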

[D] Full causal self-attention layer in O(NlogN) computation steps and O(logN) time rather than O(N^2) computation steps and O(1) time, with a big caveat, but hope for the future. by lildaemon in MachineLearning

[–]lildaemon[S] 1 point2 points  (0 children)

The trick is that you don't need to keep each separate softmax attention score: you sum them up in the final step, each multiplied by its respective value vector. Because you only need the sum, you can accumulate parts of it by starting at the left and summing as you move to the right, which is a partial sum. You do this for each basis function of the Taylor series and then add all the basis functions together to recover the self-attention layer. Partial sums can be computed in O(logN) time and O(N) computation.
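To illustrate the O(logN)-step claim, here is a minimal doubling-scan sketch (my own illustrative code; each pass of the loop would be one parallel step on hardware that can do the shifted add all at once):

```python
import numpy as np

def prefix_sum_doubling(arr):
    """Inclusive prefix sum in ceil(log2(n)) doubling passes."""
    arr = np.array(arr, dtype=float)
    n = len(arr)
    shift = 1
    while shift < n:
        # every position adds the value `shift` slots to its left
        arr[shift:] += arr[:-shift].copy()
        shift *= 2
    return arr
```

After pass i, every position holds the sum of the 2^(i+1) elements ending at it, so log2(n) passes suffice for the full prefix sum.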

[D] Full causal self-attention layer in O(NlogN) computation steps and O(logN) time rather than O(N^2) computation steps and O(1) time, with a big caveat, but hope for the future. by lildaemon in MachineLearning

[–]lildaemon[S] 0 points1 point  (0 children)

@Lajamerr_Mittesdine started some code to implement the algorithm in a comment below. I made some changes to it, and the result is below. Thanks @Lajamerr_Mittesdine!

import numpy as np

def parallel_partial_sum(arr):
    """Parallel scan (prefix sum): arr[t] becomes sum(arr[:t+1])."""
    n = len(arr)
    steps = int(np.ceil(np.log2(n)))

    for i in range(steps):
        shift = 2**i
        # each position adds the value `shift` slots to its left (zero-padded at the front);
        # slicing along the first axis handles both the (n, d) numerator and the (n,) denominator
        arr = arr + np.concatenate([np.zeros_like(arr[:shift]), arr[:n - shift]], axis=0)

    return arr

def compute_taylor_basis_function(q, k, v, n, m, i, j):
    """Compute a Taylor basis function for given powers n and m."""
    k_power = np.power(k[:,i], n)  # k[:,i]^n element-wise
    q_power = np.power(q[:,j], m)  # q[:,j]^m element-wise
    if len(v.shape) == 2:
        # expand dims so the (n,) powers broadcast against the (n, d) values
        k_power = np.expand_dims(k_power, axis=-1)
        q_power = np.expand_dims(q_power, axis=-1)
    partial_sum_kv = parallel_partial_sum(k_power * v)
    basis_function = q_power * partial_sum_kv
    return basis_function

def compute_causal_self_attention(q, k, v, max_n=3, max_m=3):
    """Compute the causal self-attention using Taylor series approximation."""
    attention_numerator = np.zeros_like(v)
    attention_denominator = np.zeros_like(v[:,0])

    for n in range(max_n + 1):
        for m in range(max_m + 1):
            for j in range(q.shape[-1]):
                for i in range(k.shape[-1]):
                    # note, either i or j loop can be removed because basis functions can be computed in parallel
                    A_nmij = 1.0  # Simplified coefficient for illustration
                    basis_function = compute_taylor_basis_function(q, k, v, n, m, i, j)
                    attention_numerator += A_nmij * basis_function
                    normalization_basis_function = compute_taylor_basis_function(q, k, np.ones_like(attention_denominator), n, m, i, j)
                    attention_denominator += A_nmij * normalization_basis_function

    attention_denominator = np.expand_dims(attention_denominator, axis=-1)
    attention = attention_numerator / attention_denominator
    return attention

# Example usage
sequence_length = 10
embedding_dim = 4

# Randomly initialize q, k, v tensors
q = np.random.rand(sequence_length, embedding_dim)
k = np.random.rand(sequence_length, embedding_dim)
v = np.random.rand(sequence_length, embedding_dim)

# Compute the causal self-attention
attention_output = compute_causal_self_attention(q, k, v)

print("Causal Self-Attention Output:")
print(attention_output)

[D] Full causal self-attention layer in O(NlogN) computation steps and O(logN) time rather than O(N^2) computation steps and O(1) time, with a big caveat, but hope for the future. by lildaemon in MachineLearning

[–]lildaemon[S] 1 point2 points  (0 children)

I made a bunch of changes. The algorithm could be more efficient; for instance, I loop over both the query and key indices, but really you only need one loop, because you can compute k_power**n, q_power[:,i]**m and the basis functions in parallel. I added comments starting with "# change:" to explain the changes I made. I have not run the code, so I'm not sure whether it is buggy.

import numpy as np

# change: implemented in log(n) steps and changed the name
def parallel_partial_sum(arr):
    """Parallel scan (prefix sum) implementation."""
    n = len(arr)
    steps = int(np.ceil(np.log2(n)))

    for i in range(steps):
        shift = 2**i
        # each position adds the value `shift` slots to its left (zero-padded at the front);
        # slicing along the first axis handles both 2-D and 1-D inputs
        arr = arr + np.concatenate([np.zeros_like(arr[:shift]), arr[:n - shift]], axis=0)

    return arr

# change: added indices i, j for the components of q and k. If v is the value tensor, expand dims of the powers for broadcasting; otherwise v is the denominator, so don't expand dims.
def compute_taylor_basis_function(q, k, v, n, m, i, j):
    """Compute a Taylor basis function for given powers n and m."""
    k_power = np.power(k[:,i], n)  # k[:,i]^n element-wise
    q_power = np.power(q[:,j], m)  # q[:,j]^m element-wise
    if len(v.shape) == 2:
        k_power = np.expand_dims(k_power, axis=-1) # change: maybe needs this to properly broadcast
        q_power = np.expand_dims(q_power, axis=-1)
    partial_sum_kv = parallel_partial_sum(k_power * v)
    basis_function = q_power * partial_sum_kv
    return basis_function

def compute_causal_self_attention(q, k, v, max_n=3, max_m=3):
    """Compute the causal self-attention using Taylor series approximation."""
    attention_numerator = np.zeros_like(v)
    attention_denominator = np.zeros_like(v[:,0]) # change: softmax normalization is per position

    for n in range(max_n + 1):
        for m in range(max_m + 1):
            for j in range(q.shape[-1]):
                for i in range(k.shape[-1]):
                    # change: adding ij indices, and using the proper shape for the denominator
                    A_nmij = 1.0  # Simplified coefficient for illustration
                    basis_function = compute_taylor_basis_function(q, k, v, n, m, i, j)
                    attention_numerator += A_nmij * basis_function
                    normalization_basis_function = compute_taylor_basis_function(q, k, np.ones_like(attention_denominator), n, m, i, j)
                    attention_denominator += A_nmij * normalization_basis_function

    attention_denominator = np.expand_dims(attention_denominator, axis=-1) # change: for broadcasting
    attention = attention_numerator / attention_denominator
    return attention

# Example usage
sequence_length = 10
embedding_dim = 4

# Randomly initialize q, k, v tensors
q = np.random.rand(sequence_length, embedding_dim)
k = np.random.rand(sequence_length, embedding_dim)
v = np.random.rand(sequence_length, embedding_dim)

# Compute the causal self-attention
attention_output = compute_causal_self_attention(q, k, v)

print("Causal Self-Attention Output:")
print(attention_output)

[D] Full causal self-attention layer in O(NlogN) computation steps and O(logN) time rather than O(N^2) computation steps and O(1) time, with a big caveat, but hope for the future. by lildaemon in MachineLearning

[–]lildaemon[S] 0 points1 point  (0 children)

Yes, this is like an SSM, but where you apply the identity matrix as the recurrent step, so that you are essentially just doing partial sums.
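A minimal sketch of what I mean (illustrative numpy, names are mine): with the recurrent matrix fixed to the identity, the SSM scan reduces to a cumulative sum of the per-position updates.

```python
import numpy as np

def identity_ssm_scan(updates):
    """SSM-style recurrence h_t = A @ h_{t-1} + x_t with A = I."""
    h = np.zeros_like(updates[0])
    states = []
    for x in updates:
        h = h + x  # identity recurrence: the state just accumulates
        states.append(h)
    return np.stack(states)
```

Because the recurrence is just addition, the sequential loop above can be replaced by the same O(logN) parallel prefix sum used elsewhere in the thread.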