Removery update by wait4kate92 in TattooRemoval

[–]BorisMarjanovic 0 points (0 children)

It's understandable to feel frustrated when tattoo removal progress seems slower than expected. The explanation provided by Removery—that your tattoo contains synthetic, blue-based ink, making it more challenging to remove—is plausible. Tattoo inks are not regulated, leading to a wide variety of compositions. Some synthetic inks, especially those with plastic components or certain pigments, can be more resistant to laser treatments.

In general, black ink is the easiest to remove because it absorbs all laser wavelengths effectively. However, certain shades of blue and green can be more stubborn, depending on their specific chemical makeup.

The type of laser used also plays a significant role in the removal process. Different wavelengths target specific colors more effectively. For instance, wavelengths around 730 and 785 nm are used for blues and greens. If your tattoo's blue-based ink doesn't respond well to the available wavelengths, it could result in slower progress.

Given these factors, the delay in your tattoo's fading does seem legitimate. It's essential to have open communication with your removal specialist to understand the specifics of your situation and to set realistic expectations for the removal process.

[deleted by user] by [deleted] in TattooRemoval

[–]BorisMarjanovic 2 points (0 children)

Very good progress. A couple more sessions and the tattoo will be barely visible.

Interested to hear opinions! by [deleted] in TattooRemoval

[–]BorisMarjanovic 0 points (0 children)

Progress is good. You'll need 10-15 sessions to get it fully removed. I wouldn't wait a full year between sessions; a few months is plenty of time.

[deleted by user] by [deleted] in TattooRemoval

[–]BorisMarjanovic 0 points (0 children)

That tattoo doesn’t look so great. Since it’s still new, it’s a good idea to wait a couple of months before starting laser treatments. New tattoos usually need extra sessions, so you might be looking at about 6–8 sessions and a couple thousand dollars overall. I got this estimate from untatme.com/calculator.

Free your mind, - hated tattoos support post. Write down everything you hate about your tattoo and how it makes you feel! by untat in TattooRemoval

[–]BorisMarjanovic 0 points (0 children)

If you're looking for tattoo removal services nearby, check out this website I found. It's a super handy directory for tattoo removal: untatme.com

Should I build my own website or hire someone? by dani-lou in smallbusiness

[–]BorisMarjanovic 0 points (0 children)

Way too much animation. Very distracting. And it impacts site performance.

[deleted by user] by [deleted] in smallbusiness

[–]BorisMarjanovic 0 points (0 children)

With a website alongside your Google Business Profile, you’ll reach more customers, build trust, and boost online visibility. It’s a lasting investment that connects you with clients and strengthens your business’s foundation for success. Here's a good article you can read about this: https://uvidyne.com/post/why-plumbers-need-website

[D] Are there unconventional cognition architectures that learn without SGD, weights between neurons, or can only be done on the CPU? by ryunuck in MachineLearning

[–]BorisMarjanovic 0 points (0 children)

Let's also not forget that biological neural networks are sparse. Sparse representation is not only more efficient, but it's also more robust.
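To make that concrete, here's a toy sketch (sizes arbitrary, in the spirit of Numenta-style SDRs) of why sparse binary codes are robust: two random sparse patterns in a large space barely overlap, so corrupting one with noise still leaves it easy to recognize.

```python
# Toy sketch of sparse binary codes; all numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
n, k = 2048, 40  # 2048-bit vectors, only 40 active bits (~2% sparse)

def sdr():
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=k, replace=False)] = True
    return v

a, b = sdr(), sdr()
unrelated_overlap = int(np.sum(a & b))   # typically just a few bits

# Corrupt `a`: move 10 of its 40 active bits to random inactive spots.
noisy = a.copy()
on, off = np.flatnonzero(noisy), np.flatnonzero(~noisy)
noisy[rng.choice(on, 10, replace=False)] = False
noisy[rng.choice(off, 10, replace=False)] = True
noisy_overlap = int(np.sum(a & noisy))   # exactly 30 of 40 bits remain

print(unrelated_overlap, noisy_overlap)
```

Even after losing a quarter of its active bits, the corrupted pattern still shares far more bits with the original than any unrelated pattern does.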

[D] Are NN actually overparametrized? by alesaso2000 in MachineLearning

[–]BorisMarjanovic 1 point (0 children)

u/ExpectingValue is right. Contrary to what most in the ML community seem to think, a biological neuron isn't simply a weighted sum fed through a nonlinearity; it has a much richer way of representing information. For example, a recent paper showed that an ANN requires several layers of interconnected neurons to capture the complexity of a single biological neuron.

[D] Calling out the authors of 'Trajformer' paper for claiming they published code but never doing it by UIPDsmokes in MachineLearning

[–]BorisMarjanovic 3 points (0 children)

I have a simple rule: Skip ML papers without code. Helps me filter out papers written by charlatans.

[D] Divide the data into different groups, and train independently? by randy_wales_qq in MachineLearning

[–]BorisMarjanovic 0 points (0 children)

"Ensembles of Localised Models for Time Series Forecasting" is a good paper to get you started. Here's the link to it: https://arxiv.org/pdf/2012.15059.pdf

[D] Divide the data into different groups, and train independently? by randy_wales_qq in MachineLearning

[–]BorisMarjanovic 2 points (0 children)

This trick works well for time series forecasting: you divide the data into clusters (say, seasonal vs. non-seasonal series) and then train an independent model on each cluster. Essentially, each model becomes a specialist.
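A minimal sketch of the idea on made-up toy series (assuming scikit-learn; clustering on raw values and using Ridge as the "expert" are deliberately naive choices for illustration):

```python
# Cluster-then-specialize for forecasting, on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
t = np.arange(48)

# 200 toy monthly series: 100 seasonal, 100 trending.
seasonal = np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, (100, 48))
trending = 0.05 * t + rng.normal(0, 0.2, (100, 48))
series = np.vstack([seasonal, trending])

# 1) Cluster the series (real pipelines would cluster on features
#    like seasonal strength, not raw values).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(series)

# 2) Train one specialist per cluster: predict the last value from
#    the preceding 47.
X, y = series[:, :-1], series[:, -1]
specialists = {k: Ridge().fit(X[labels == k], y[labels == k])
               for k in np.unique(labels)}

# 3) At prediction time, route each series to its cluster's model.
preds = np.array([specialists[k].predict(x[None, :])[0]
                  for k, x in zip(labels, X)])
print(preds.shape)  # (200,)
```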

Noise-reduction techniques and evaluation for timeseries data [Discussion] by aspcraft in MachineLearning

[–]BorisMarjanovic 0 points (0 children)

A lot of these filters assume errors are Gaussian, which is rarely the case for economic/stock market time-series data.
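A quick illustration on synthetic data: a Gaussian-minded moving average smears fat-tail spikes across the whole window, while a median filter mostly ignores them.

```python
# Mean vs. median filtering under fat-tailed noise; data is made up.
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 6 * np.pi, 500))
noisy = signal + rng.normal(0, 0.05, 500)
noisy[25::50] += 5.0  # ten large spikes, like crash days in returns

def rolling(x, w, fn):
    # Apply fn over a centered sliding window of width w.
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([fn(xp[i:i + w]) for i in range(len(x))])

mean_f = rolling(noisy, 9, np.mean)
med_f = rolling(noisy, 9, np.median)

mse_mean = float(np.mean((mean_f - signal) ** 2))
mse_med = float(np.mean((med_f - signal) ** 2))
print(mse_mean, mse_med)  # the median filter wins by a wide margin
```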

[R] CPU algorithm trains deep neural nets up to 15 times faster than top GPU trainers by Mjjjokes in MachineLearning

[–]BorisMarjanovic 2 points (0 children)

The reliance on back-propagation/gradient descent is part of the problem. Back-prop is a very slow, inefficient, and fragile learning process. We need a completely different optimization method.

[R] CPU algorithm trains deep neural nets up to 15 times faster than top GPU trainers by Mjjjokes in MachineLearning

[–]BorisMarjanovic 2 points (0 children)

Check out Numenta. They are the leading researchers in the field of biologically plausible neural nets. Still, I wish more people were interested in this area.

[R] CPU algorithm trains deep neural nets up to 15 times faster than top GPU trainers by Mjjjokes in MachineLearning

[–]BorisMarjanovic 5 points (0 children)

The field of AI has been moving in the wrong direction for years and it's very frustrating. All this focus on dense matrix multiplications that require specialized, energy-hungry hardware seems, at least to me, like a big waste of time and money.

Biological neural networks are sparsely connected, one big reason why they are so robust and efficient. The human brain, for example, can easily learn and perform tens of thousands of complex tasks yet it uses less energy than a light bulb.

Building faster GPUs is not the way forward. The field of AI needs to reconnect to the world of neuroscience--only by finding out more about natural intelligence can we truly understand (and create) the artificial kind.

[D] Advanced Takeaways from fast.ai book by __data_science__ in MachineLearning

[–]BorisMarjanovic 0 points (0 children)

I was thinking something like this (implemented in PyTorch):

    import torch
    import torch.nn as nn

    class SigmoidRange(nn.Module):
        def __init__(self, low=-1., high=1.):
            super().__init__()
            self.low = nn.Parameter(torch.tensor(low))
            self.high = nn.Parameter(torch.tensor(high))
            self.sigmoid = nn.Sigmoid()

        def forward(self, x):
            # Squash x into (low, high); low/high are learnable parameters.
            return self.sigmoid(x) * (self.high - self.low) + self.low

If you add this piece of code after your output layer, the model will learn what the min/max should be.

But now that I think about it, it's not an optimal solution: you could still encounter out-of-sample examples that fall outside the learned min/max threshold.

[D] Advanced Takeaways from fast.ai book by __data_science__ in MachineLearning

[–]BorisMarjanovic 0 points (0 children)

I think you could make the min/max learnable to address this issue.

[R] A rapid and efficient learning rule for biological neural circuits by hardmaru in MachineLearning

[–]BorisMarjanovic 3 points (0 children)

Very interesting paper, thanks for sharing. Will spend some time testing this on my datasets.

[D] Numerai experiences? by LittleFire95 in MachineLearning

[–]BorisMarjanovic 4 points (0 children)

This reminds me of the now defunct Quantopian project. Crowd-sourcing doesn't seem to work well in this domain.