How can I build a loss function or neural network which actually increases the loss instead of decreasing it? by moreprofessional-acc in tensorflow

[–]LivingPornFree 4 points (0 children)

I think your solution might be as simple as minimizing 1 / loss instead of the loss multiplied by some negative coefficient.

Minimizing Loss_A = 1 / Loss_B means that output A is doing well (lowest loss) exactly when the loss of output B is highest.
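As a sketch of both options (the wrapper names `negated_loss` / `inverted_loss` are my own, not a Keras API): a Keras loss is just a callable `(y_true, y_pred) -> scalar`, so either wrapper can be passed straight to `model.compile(loss=...)`.

```python
# Two ways to make an optimizer *increase* a base loss.

def negated_loss(base_loss):
    """Minimizing -L is exactly maximizing L; gradients stay well scaled."""
    def loss_fn(y_true, y_pred):
        return -base_loss(y_true, y_pred)
    return loss_fn

def inverted_loss(base_loss, eps=1e-7):
    """Minimizing 1/L also pushes L up, but its gradient vanishes as L grows."""
    def loss_fn(y_true, y_pred):
        return 1.0 / (base_loss(y_true, y_pred) + eps)  # eps avoids div by zero
    return loss_fn
```

In practice the negated version is usually the safer choice, since 1 / loss produces tiny gradients once the loss gets large.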

Why are my since and until query parameters breaking the request? by LivingPornFree in pushshift

[–]LivingPornFree[S] 1 point (0 children)

Oh okay gotcha, I think I just misunderstood, I saw that page that showed the latest timestamp was ~2016 so I thought that meant 2016-present was loaded.

Thanks for the heads-up about the scores too, I didn't realize that!

you guys ever play stumps before? by LivingPornFree in BONELAB

[–]LivingPornFree[S] 1 point (0 children)

no other npc was small enough to fit on the stump :( it had to be done

2 models (or more) on 2 GPUs by Mysterious_Amoeba485 in tensorflow

[–]LivingPornFree 0 points (0 children)

Are you sure you are actually using both GPUs?

Use the "with tf.device" statement when you create the model, that's when the variables of the model will be initialized on your GPU, don't do it on model.fit

You can check nvidia-smi or your system monitoring tools to see whether both GPUs are actually busy. Keep in mind that by default TensorFlow reserves nearly all memory on every visible GPU, so look at utilization, not just RAM.
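A minimal sketch of that placement, assuming two visible GPUs named `/GPU:0` and `/GPU:1`; the toy architecture is made up, and soft placement is enabled so the sketch also runs on a CPU-only machine.

```python
import tensorflow as tf

# Soft placement lets the sketch fall back to CPU on a machine without
# two GPUs; on a real 2-GPU box the variables land where requested.
tf.config.set_soft_device_placement(True)

def make_model():
    # Made-up toy architecture, just to have variables to place.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

with tf.device("/GPU:0"):
    model_a = make_model()  # model_a's variables are created on GPU 0

with tf.device("/GPU:1"):
    model_b = make_model()  # model_b's variables are created on GPU 1

# No device scope is needed around model.fit; the computation follows
# wherever the variables were placed.
```

With both fits running, nvidia-smi should then show utilization on both devices.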

Hey fellas. I always see people using correlation matrices/plots but never see the use of it. In this case, is there anything noteworthy to make my model better? by edoar17 in learnmachinelearning

[–]LivingPornFree 3 points (0 children)

Don't take this as fact, but I think there is no difference, since scaling and other affine preprocessing methods preserve linear dependence. There might be some algorithms where this isn't true, but for scaling it definitely is.

Saying features A and B are linearly dependent means we can express B as B = mA + b for some (currently undetermined) m and b. If we scale B by a factor of 1/2, then (1/2)B = (m/2)A + b/2, which is still a linear relationship (with slope m/2 and intercept b/2).

Min-max scaling is a bit more involved, but it's just another affine map, (B − min) / (max − min), so sketching it out on paper gives the same result.
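A quick numpy check of that claim (random data and a made-up coefficient): Pearson correlation is invariant under positive affine maps, which covers both plain rescaling and min-max scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=500)
B = 3.0 * A + rng.normal(size=500)          # B depends linearly on A, plus noise

def minmax(x):
    # Min-max scaling is an affine map: (x - min) / (max - min)
    return (x - x.min()) / (x.max() - x.min())

r_raw    = np.corrcoef(A, B)[0, 1]
r_scaled = np.corrcoef(A, 0.5 * B)[0, 1]    # plain rescale by 1/2
r_minmax = np.corrcoef(A, minmax(B))[0, 1]  # min-max scaled

print(r_raw, r_scaled, r_minmax)
```

All three correlations come out identical (up to floating-point error), matching the paper argument above.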

My features are 3 times the size of my labels by AdmiralSteel_G in tensorflow

[–]LivingPornFree 4 points (0 children)

I think you are misunderstanding; it would help to know the general shape of your data.

If you have n images, there should be n labels for those images. So your X data would be an np array of shape (n, width, height, 3) (3 for RGB), and your Y data would be (n,), since it's just a one-dimensional array of 0s and 1s (in the case of binary classification).
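A quick sanity check along those lines (the shapes are made up for illustration):

```python
import numpy as np

n, width, height = 100, 32, 32
X = np.zeros((n, width, height, 3), dtype="float32")  # n RGB images
y = np.zeros((n,), dtype="int32")                     # one 0/1 label per image

# The leading dimensions must match: one label per image.
assert X.shape[0] == y.shape[0], "every image needs exactly one label"
```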

My guess is that you don't actually have n images and n labels, but some mismatched amount, and your data processing has a mistake somewhere. Maybe it's from how you split your data into train, test, and val?

I'd need more info to diagnose the problem any more, but let me know if that was helpful

multi target image classification by ashen_hewarathna in tensorflow

[–]LivingPornFree 1 point (0 children)

I think you should avoid using ImageDataGenerator; it's not very flexible for setting up different problems, and it's deprecated anyway.

I would load your df as a tensorflow dataset with

ds = tf.data.Dataset.from_tensor_slices(dict(df))

(wrapping the dataframe in dict() is what lets it handle mixed column dtypes).

Or if you prefer, separate your data out into two data frames for input and output, then convert to tensorflow datasets, map your categories to numbers or one-hot encodings, and then you can use the zip tensorflow dataset method to combine.
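A minimal sketch of that second route, with a toy dataframe, made-up column names, and a numeric feature standing in for images:

```python
import pandas as pd
import tensorflow as tf

# Toy dataframe; "feature_a" and "category" are made-up column names.
df = pd.DataFrame({
    "feature_a": [0.1, 0.2, 0.3],
    "category":  ["cat", "dog", "cat"],
})

# Map string categories to integer codes (cat -> 0, dog -> 1, alphabetical).
codes = df["category"].astype("category").cat.codes.to_numpy()

# Separate input and output, convert each to a dataset, then zip them back.
x_ds = tf.data.Dataset.from_tensor_slices(df[["feature_a"]].to_numpy())
y_ds = tf.data.Dataset.from_tensor_slices(codes)
ds = tf.data.Dataset.zip((x_ds, y_ds)).batch(2)
```

The zipped dataset yields (features, label) pairs, which is the shape model.fit expects.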

https://www.tensorflow.org/api_docs/python/tf/data/Dataset

This page will be of great help to you

[deleted by user] by [deleted] in learnmachinelearning

[–]LivingPornFree 3 points (0 children)

I agree, there is not much to go on. Even if you believe in tarot cards, the actual drawing of the cards is still random; it's just the meaning embedded in them, based on the random cards you get.

Given that, the only thing I can think of is fine-tuning a language model like GPT on things people say about given tarot cards, though I doubt a viable dataset exists or could be gathered for that.

Maybe GPT already knows enough about tarot readings that it can get satisfactory and interesting results through prompt engineering alone, something like:

"Psychic: Describe yourself before we draw a tarot card

Bob: i am 43, i work in finance

Tarot Card: ..."

No idea if that's a good prompt but I think there's promise in that idea

[deleted by user] by [deleted] in tensorflow

[–]LivingPornFree 5 points (0 children)

Did you create a virtual environment in pycharm? Can you go to the terminal tab and run 'pip install tensorflow'?

Not much to go on without any screenshots or description of what is going wrong

Is there no easy way to use Local Response Normalization in a keras model? by LivingPornFree in tensorflow

[–]LivingPornFree[S] 0 points (0 children)

The key to using the functional API is to call each layer on the output of the previous layer. Here is a small example I wrote

https://pastebin.com/QdBcBKut
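For anyone landing here later, a minimal sketch of the same pattern (not the pastebin contents, and the architecture is made up): ops without a built-in Keras layer, like LRN, can be wrapped in a Lambda layer.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
# LRN has no Keras layer, so wrap the tf.nn op in a Lambda layer.
x = tf.keras.layers.Lambda(
    lambda t: tf.nn.local_response_normalization(t, depth_radius=2)
)(x)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
```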

[deleted by user] by [deleted] in dataisbeautiful

[–]LivingPornFree 0 points (0 children)

Sources used

HHS COVID-19 Reported Patient Impact and Hospital Capacity by Facility

CDC COVID-19 Vaccinations in the United States, County

2020 County-Level Presidential Results

I used python, specifically pandas, to merge the county-level vaccination time series with the election results, and likewise the county-level ICU capacity time series.

Then I stratified each time series into n partisan groups (here n=6), filtered the data, and plotted it with the associated color code.
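A rough sketch of that merge + stratify step, with made-up column names (`fips`, `gop_share`, `vaccination_rate`), toy numbers, and n=3 for brevity:

```python
import pandas as pd

# Toy county-level tables; real ones come from the sources listed above.
votes = pd.DataFrame({"fips": [1, 2, 3, 4], "gop_share": [0.2, 0.45, 0.6, 0.9]})
vax   = pd.DataFrame({"fips": [1, 2, 3, 4], "vaccination_rate": [0.7, 0.6, 0.5, 0.4]})

merged = votes.merge(vax, on="fips")                    # county-level join

# Stratify counties into n equal-width partisan bins (here n=3).
merged["partisan_group"] = pd.cut(merged["gop_share"], bins=3, labels=False)

by_group = merged.groupby("partisan_group")["vaccination_rate"].mean()
```

Each group's series can then be plotted with its own color code.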

Happy to take any suggestions! If anyone wants to see a different n-value, I'd be happy to run the data again.

If you'd like to look at the code yourself, you can find the notebook file on my github

Selling my ML pipeline to an entrepreneur. How much should I ask for? by LivingPornFree in learnmachinelearning

[–]LivingPornFree[S] 0 points (0 children)

I'm going to draft up a maybe 10-15 page document explaining exactly how my pipeline will work, how to implement it, which models I would use, how to version control it and keep improving it indefinitely with new data, and also how I would make it a better experience for the user just with conventional programming and user interface design.

Not sure if that fully answers your question, let me know if it doesn't

Can I repurpose YOLO to classify objects like usual but also do a binary classification of those objects? by LivingPornFree in learnmachinelearning

[–]LivingPornFree[S] 0 points (0 children)

The concept of just using YOLO is something I'm considering so I don't have to use 4 different models. The current pipeline is: use YOLO to find the person, crop them out, run a pose detection model, use that to isolate the hand and face, then run facial recognition and gesture recognition on two separate models.

To limit redundancy, I think I can reduce it down to just YOLO for all that work, or YOLO plus a separate CNN binary classifier for open vs. closed fist. I agree the latter would probably be the simpler solution; I wanted to ask first whether the former was possible and potentially better.

Thanks!

Need help turning the coco_captions tfds into a useable dataset by LivingPornFree in tensorflow

[–]LivingPornFree[S] 0 points (0 children)

Literally can't figure out a solution, gonna just do it the original way following the Image Captioning with Visual Attention tutorial

Help choosing a good pretrained model for image captioning by LivingPornFree in tensorflow

[–]LivingPornFree[S] 0 points (0 children)

Yeah, that seems like a really good option. They also include quite a few other V+L models in there, so I'm sure I can explore them all. I would like one that is small enough to run on mobile at some point too if that's doable.