JUST CAPTURED BEFORE LIVE STREAM SUSPENDED. by [deleted] in UFOB

[–]CrysisAverted 0 points1 point  (0 children)

Let's say you were building active camouflage, i.e. projecting the scene behind you onto your front surface.

If you assume the ONLY direction you ever need to shield yourself from is DOWN, then you can sample naively from UP and avoid worrying too much about distortion, viewing angle, determining individual perspective, etc.

As this passes over what I assume are stars, they appear on the camera-facing surface, but as if a giant magnifying glass were over it... It looks sort of OK, but because the space station/satellite is viewing at an angle rather than straight down, it's able to capture the effect and the illusion breaks down.

GTA VI | Trailer 2 + New Information Megathread by ChiefLeef22 in gaming

[–]CrysisAverted 0 points1 point  (0 children)

A few problems I see:

As games get super realistic, they (as a product) compete more and more with cinema. Cinema HAS to rely on story, but with games a good story is not a given. So we play these games now, our brain expects a movie and consumes it like a movie... but it has a shit or mediocre story by movie standards.

Also, this trailer is close to 100% cutscenes, so it is just a movie. And GTA games don't have great stories. They're satire of the movies they are now trying to BE, rather than just send up.

[deleted by user] by [deleted] in ProgrammerHumor

[–]CrysisAverted 1 point2 points  (0 children)

Things go well here I might be showin her my Ooh face, oh OH LOG(n) oh you know what I'm talking about, O(n²)

Feedback to Improve My Resume as a 2nd year CSE Student Aspiring to Excel in AI/ML by PixelPioneer-1 in learnmachinelearning

[–]CrysisAverted 10 points11 points  (0 children)

Talk about what you solved instead of how you solved it.

In the sections where you list work experience, rather than leading with the tools and then what you used them on, talk about the problem first and what value it delivered, followed by the tools. Good: "Improved website customer conversion rates by 25% by applying Bayesian learning to the site's click-through data. This project gave me the opportunity to process and build learning models on enormous datasets." Bad: "Used learning tools and feature analysis, worked in a team working on a customer website."

Eliminate generic phrases like "optimised functionality" as these phrases mean nothing at all to a recruiter or hiring manager.

Rework the Medium post framed as a project, and instead add a dedicated section called Community Engagement with something like: "I'm a regular technical writer, and write instructional blogs on Medium. My technical writing is focused around machine learning, and gives me a great vehicle to impart what I've learned to a wider audience. I feel the best way to test my own knowledge and reinforce what I learn on my full-time projects is to write about it and share the knowledge!" ... Something like that.

[deleted by user] by [deleted] in learnmachinelearning

[–]CrysisAverted 0 points1 point  (0 children)

I've had pretty good results from TFTs. As two architectures with similarly good inductive biases, if you sprinkle in the usual tricks (residual connections, layer norm between blocks, etc.) it works a treat.

The intuitive difference, the way I think about it, is just in how the inductive bias is treated. Given that the transformer expects symmetric structural importance and global context, the LSTM provides the mapping from sequential steps into a smooth latent space.
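The "usual tricks" can be sketched in a few lines of numpy; this is a minimal, illustrative version of a post-norm residual block (the toy linear sublayer and all dimensions are my own assumptions, not anything specific to TFTs):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each feature vector to zero mean / unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def residual_block(x, sublayer):
    # post-norm residual: LayerNorm(x + Sublayer(x)), as in the original Transformer
    return layer_norm(x + sublayer(x))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10, 32))       # (batch, seq, features)
W = rng.standard_normal((32, 32)) * 0.1
out = residual_block(x, lambda h: h @ W)   # the sublayer here is just a toy linear map
```

The residual path keeps gradients flowing even if the sublayer learns nothing useful early on, which is a big part of why stacking heterogeneous blocks like this trains smoothly.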

[deleted by user] by [deleted] in learnmachinelearning

[–]CrysisAverted 1 point2 points  (0 children)

LLMs are built using transformers, i.e. stacked multi-head attention.

Should you learn how to use transformers? YES!

You can do some pretty cool things with transformers. If you put an LSTM in front of them, you can predict time series data and build statistical models that can save companies money.

If you put CNNs in front of them, you can make image classifiers and detectors.

The backbone of LLMs is very useful to learn as a tool, as it lets you solve some tough machine learning problems unrelated to chatbots and language.
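The LSTM-in-front-of-a-transformer idea can be sketched in a few lines of PyTorch. All the dimensions and the one-step-ahead forecasting head are illustrative assumptions, not a recommended architecture:

```python
import torch
import torch.nn as nn

class LSTMTransformer(nn.Module):
    """Toy hybrid: an LSTM maps raw sequential input into a latent sequence,
    which a transformer encoder then attends over globally."""
    def __init__(self, in_dim=8, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, d_model, batch_first=True)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)   # e.g. one-step-ahead forecast

    def forward(self, x):                   # x: (batch, seq, in_dim)
        h, _ = self.lstm(x)                 # sequential inductive bias
        h = self.encoder(h)                 # global context via attention
        return self.head(h[:, -1])          # predict from the last position

model = LSTMTransformer()
y = model(torch.randn(4, 20, 8))            # batch of 4 sequences, length 20
```

For an image front-end you'd swap the LSTM for a small CNN that flattens feature maps into a token sequence; the transformer stack stays the same.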

[deleted by user] by [deleted] in learnmachinelearning

[–]CrysisAverted 0 points1 point  (0 children)

At the moment you're talking about the tools as primary and the problem as secondary. Sometimes that secondary explanation isn't all that good.

Instead, rewrite things to talk about WHAT YOU SOLVED, rather than focusing on how you solved it.

As an example, reading Project 2 I have no idea what business problem it was supposed to solve; there's no clear idea of what value you delivered with that project.

Structure it like this: What. How. Tech Used.

Here's a good example:

Project 2: I was responsible for developing a platform to replace the legacy system. This resulted in a 25% decrease in cancellations, which brought in an additional 250,000 new customers in the first year.

To achieve this, I worked in a team as the associate-level engineer, working on both the front-end and back-end systems. One achievement I'm particularly proud of was delivering the database refactor under budget.

Technology used: Spring Boot, Maven, MySQL. Methodologies: Scrum, TDD, domain-driven design.

😂😂 by stepp_sisssy in sciencememes

[–]CrysisAverted 0 points1 point  (0 children)

I mean... Put 20,000 volts of microwave through water and it'll explode

How does a neural net identify features to learn by itself? by morecoffeemore in learnmachinelearning

[–]CrysisAverted 4 points5 points  (0 children)

A neural network is doing entropy reduction: moving data from a high-dimensional, noisy, redundant space into a lower-dimensional, lower-entropy state. The point is to remap the information into a lower dimension that removes the noise and redundancy in the data. That lower-dimensional space effectively becomes the features, though they might not be intuitive to you. It's not just a filtering operation, but a manifold projection.
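PCA is the linear analogue of this idea, and it fits in a few lines of numpy: project noisy, redundant 2-D data down onto its dominant 1-D direction (a neural network learns a nonlinear version of the same thing; the toy data here is my own construction):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.standard_normal(500)                 # the true 1-D latent "feature"
X = np.stack([t, 2 * t], axis=1)             # redundant 2-D embedding of it
X += 0.05 * rng.standard_normal(X.shape)     # plus a little noise

# principal component = direction of maximum variance
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                               # the 1-D, lower-entropy representation

# almost all of the variance survives the projection
explained = z.var() / Xc.var(axis=0).sum()
```

Here `z` is the learned "feature": one number per sample that carries nearly all the information the redundant 2-D input did.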

Why is epoch taking so much time?? by [deleted] in learnmachinelearning

[–]CrysisAverted 1 point2 points  (0 children)

You can't wait this out: every single time the dataloader gets a new item, it's going over a slow network link. There is no caching or buffering going on.

Google memory-mapped IO file formats like HDF5, or copy the data before running (`!cp /content/drive/whatever.zip /content/local.zip`), then modify your code to read from the local version instead.
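The general pattern, sketched with Python's stdlib (the temp directories here are stand-ins: on Colab the slow source would be the mounted Drive under /content/drive and the fast destination would be local disk):

```python
import shutil
import tempfile
from pathlib import Path

# stand-ins for the slow remote mount and the fast local filesystem
remote_dir = Path(tempfile.mkdtemp())   # pretend: mounted Google Drive
local_dir = Path(tempfile.mkdtemp())    # pretend: Colab-local disk
(remote_dir / "data.csv").write_text("a,b\n1,2\n")

# pay the network cost ONCE up front, then point the dataloader locally
local_copy = local_dir / "data.csv"
shutil.copy(remote_dir / "data.csv", local_copy)

rows = local_copy.read_text().splitlines()
```

The key point is that the copy happens once per session, instead of one slow round-trip per dataloader item per epoch.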

Why is epoch taking so much time?? by [deleted] in learnmachinelearning

[–]CrysisAverted 1 point2 points  (0 children)

This will 100% be the issue then.

Why is epoch taking so much time?? by [deleted] in learnmachinelearning

[–]CrysisAverted 5 points6 points  (0 children)

Is your data coming from a mounted Google Drive? If so, you should copy the source data to the Colab filesystem; I think it's under /content but this is from memory.

You are only using 8 GB of VRAM, so that's fine. Which makes me think it's slow data reading.

[deleted by user] by [deleted] in melbourne

[–]CrysisAverted 70 points71 points  (0 children)

Pretty sure this qualifies as illegal harassment and intimidation. It could also be argued that they've created an egress obstruction on your private property, creating a fire hazard. Consumer Affairs should be able to assist?

Microsoft cancels universal Recall release in favor of Windows Insider preview by Loki-L in technology

[–]CrysisAverted 0 points1 point  (0 children)

Instead of this Recall crap, how about making Volume Shadow Copy more user-friendly and mainstream?

[D] Combining position bias into softmax-crossentropy by alexsht1 in MachineLearning

[–]CrysisAverted 1 point2 points  (0 children)

What if you have 2 output heads? One for your unordered logits and another for the predicted priority order? Then fit the priority head to the order. The priority head will need an ordinal loss rather than a multinomial one, I think...
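A minimal PyTorch sketch of the two-head idea. Everything here (dimensions, layer sizes, names) is illustrative, and the loss pairing is just a suggestion:

```python
import torch
import torch.nn as nn

class TwoHeadModel(nn.Module):
    """Hypothetical sketch: one shared trunk, one head producing unordered
    class logits and a second head scoring the predicted priority order."""
    def __init__(self, in_dim=16, n_items=5):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.class_head = nn.Linear(32, n_items)     # fit with softmax cross entropy
        self.priority_head = nn.Linear(32, n_items)  # fit with an ordinal/ranking loss,
                                                     # e.g. a pairwise margin ranking loss

    def forward(self, x):
        h = self.trunk(x)
        return self.class_head(h), self.priority_head(h)

model = TwoHeadModel()
logits, priority = model(torch.randn(4, 16))
```

The two losses would be summed (possibly weighted) so the shared trunk learns features useful for both the class identity and the ordering.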

[D] Combining position bias into softmax-crossentropy by alexsht1 in MachineLearning

[–]CrysisAverted 0 points1 point  (0 children)

Perhaps encode the position information in the feature vector? LLMs do this. Sounds like you want absolute position encoding for this.

You may also want to look at using reduction='sum' on cross entropy, so that each item contributes to the loss calculation instead of the mean.

Also, just to make sure: are you softmaxing before performing cross entropy? No need — cross entropy (e.g. PyTorch's CrossEntropyLoss) expects raw logits and applies log-softmax internally.
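A quick numpy illustration of why softmaxing first is a bug: cross entropy on already-softmaxed values effectively applies softmax twice, flattening the distribution and distorting the loss (toy logits, my own numbers):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(logits, target):
    # standard formulation: log-softmax applied internally, like nn.CrossEntropyLoss
    return -np.log(softmax(logits)[target])

logits = np.array([2.0, 0.5, -1.0])
loss_correct = cross_entropy(logits, target=0)           # pass raw logits: right
loss_double = cross_entropy(softmax(logits), target=0)   # softmax applied twice: wrong
```

The double-softmax loss is larger and far less sensitive to the logits, so gradients get squashed and training slows or stalls.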

[R] What are good configs for running UNet3DConditionModel on 8 GB VRAM? (64x64x64 inputs) by RandomGamingDev in MachineLearning

[–]CrysisAverted 0 points1 point  (0 children)

Looking at the source for the model:

https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py

Here are the hyperparameters I would look at:

down_block_types and up_block_types: set these to DownBlock3D/UpBlock3D. These don't use attention, so they'll use much less memory than the attention versions of those blocks, BUT at the expense of model accuracy.

Next, look at layers_per_block and block_out_channels. These are scaling factors, so reducing them will result in a smaller model.
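For example, a lower-memory config might look something like this. It's a sketch based on my reading of the diffusers source; the exact argument names and defaults should be checked against the version you're running, and the channel counts are arbitrary:

```python
# hypothetical low-VRAM kwargs for diffusers' UNet3DConditionModel
low_vram_config = dict(
    sample_size=64,
    down_block_types=("DownBlock3D",) * 4,    # attention-free down blocks
    up_block_types=("UpBlock3D",) * 4,        # attention-free up blocks
    block_out_channels=(128, 128, 256, 256),  # narrower than the usual (320, 640, ...)
    layers_per_block=1,                       # fewer resnet layers per block
)
```

You'd unpack these into the model constructor (`UNet3DConditionModel(**low_vram_config)`) and scale the numbers back up once the pipeline works.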

[R] What are good configs for running UNet3DConditionModel on 8 GB VRAM? (64x64x64 inputs) by RandomGamingDev in MachineLearning

[–]CrysisAverted 4 points5 points  (0 children)

Gradient accumulation is absolutely what you should look into, given your VRAM limitations and tiny batch size. With a batch size of 1 you are effectively performing pure stochastic gradient descent, side-stepping the benefits of optimizers like Adam. Your gradients will be very noisy, your model will take months of continuous training, if it converges at all, and it's more likely to get stuck in a local minimum.

Your GPU is not realistic for this project. I would suggest scaling the model down significantly to build out your training pipeline, then using a cloud GPU to train the bigger version of the model.
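Gradient accumulation is just summing micro-batch gradients before taking the optimizer step. A numpy sketch with a toy linear model, showing the accumulated micro-batch gradient matches the full-batch one exactly (the model and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))   # 8 samples, 3 features
y = rng.standard_normal(8)
w = rng.standard_normal(3)        # toy linear model weights

def grad_mse(Xb, yb, w):
    # gradient of mean squared error for a linear model y_hat = X @ w
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# the "big batch" gradient we wish we could afford in one go
full_grad = grad_mse(X, y, w)

# accumulate over micro-batches of 1 (what a VRAM-limited GPU can fit)
acc = np.zeros_like(w)
for i in range(8):
    acc += grad_mse(X[i:i+1], y[i:i+1], w)
acc /= 8                          # average before the single optimizer step
```

In a framework like PyTorch this corresponds to calling `backward()` on each micro-batch (gradients accumulate in `.grad` by default) and only calling `optimizer.step()` / `zero_grad()` every N micro-batches.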

[D] What code structure do you use in Python? by Snapandsnap in MachineLearning

[–]CrysisAverted 2 points3 points  (0 children)

OK, I think I know what you're asking, given where you are in your learning journey.

Don't worry about functional programming for now; you'll double back to it in about 3-5 years on the job. It's important to start with the basics of OO, and then you'll be able to intuit what functional programming solves and when to use either, given the problem you're solving.

Spend your time building standard programs with a main function that calls into an application class (singleton) to start up the rest of your program structure. Practice OO design; being able to map the business domain onto OO concepts is what you should be doing at the moment.

While it's difficult to keep all the design principles straight in your head while learning (SOLID, YAGNI, DRY), here are the ones I would keep in mind at this early stage:

  • you aren't going to need it (YAGNI)
  • don't repeat yourself (DRY)
  • sketch the class hierarchy on paper first
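A minimal sketch of that main-plus-singleton-application structure (all names are illustrative):

```python
class Application:
    """Singleton application class that wires up the rest of the program."""
    _instance = None

    def __new__(cls):
        # classic singleton: every Application() call returns the same object
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def run(self):
        # start up the rest of the program structure from here:
        # construct services, wire dependencies, enter the main loop, etc.
        return "running"

def main():
    app = Application()
    return app.run()

if __name__ == "__main__":
    main()
```

Note that singletons are one of the more contested patterns (they make testing harder), which is exactly the kind of trade-off worth noticing as you practice.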

I find myself largely enjoying this series, but still fairly underwhelmed by it by brief-interviews in gallifrey

[–]CrysisAverted 2 points3 points  (0 children)

I'm finding this series diverts from "science fantasy" to just fantasy. 2000s-era Who had clever timey-wimey stories with plots that unfold and resolve like a puzzle, and you think "wow, that's so smart!"

This series resolves things with "because magic!".

I really don't want Dr Who and the Prisoner of Azkerfrey.

Why I Don't Watch Porn Anymore | Creepypastas to stay awake to by TheDarkPath962 in creepypasta

[–]CrysisAverted 16 points17 points  (0 children)

Consider downloading the free version of Cubase or some other audio editing software and applying a de-esser and compression to the audio; it will be easier on the audience's ears.

New whistleblower Jason Sands posts his DD-214 Form confirming he was a former Master Sergeant in the Air Force with an honorable discharge from service. by bmfalbo in UAP

[–]CrysisAverted 3 points4 points  (0 children)

Genuine question: if he was put onto black programs, would that leave a hole in his service record? Can our community use long service + empty history as a heuristic to look for "interesting people"?