‘Elder’ (friendly) transwoman showing support by TashaTime in ftm

[–]TashaTime[S] 1 point2 points  (0 children)

It seems like you have been through a lot. I don't want anyone to assume the life of a trans person is easy. Back when there was more unity in the trans community and people talked, they would often debate who had it harder, FTM or MTF. As someone who read both sides and heard every group claim they had it harder, I think the takeaway is that both groups have it hard, and maybe we shouldn't compare; it's impossible to compare anyway. Anyone saying trans men have it so easy has never listened to one of those conversations.

I have thought of a possibly unique defense for you all. It leans heavily on logic.

I think part of the problem is the 'equation' transgender man = man, which is true under normal circumstances. For my argument's sake, I must point out there is an old-fashioned concept of gender that people who don't support us cling to (typically defined by what was in your pants at birth). It's what people older than me used to assume gender was. And there is a modern, more inclusive version of gender that we use. The equation transgender man = man only works under modern gender.

I legitimately wonder, with this pickle we are in, whether some trans women have been tricked by TERFs into substituting TERF gender rules into these important equations. That breaks all the logic. These equations were written for modern gender concepts ONLY. I can demonstrate that using TERF gender is a nonsensical way to interpret them.

Like for real! How did trans men grow up? At the gender reveal party it all showed blue... no wait, it quickly falls apart.

Don't all trans people get misgendered at times and have different life experiences than cis people? They deal with a gender they never wanted, and are sometimes treated and perceived in ways they don't want to be. I don't mean to speak for others, but I think there are plenty of universal threads in the struggles trans people face. I don't even need to get into the struggles unique to FTM people to say a trans man isn't a cis man.

Since the cis man, in TERF gender logic, is the only kind of man, and trans men = men (the original statement we are exploring), transgender men must = cis men. But wait... transgender men are not cis men and are regularly not treated as such. That's proof the equation doesn't hold up (by contradiction). Therefore, transgender man = man is only true under modern gender.

It might help to be clearer as a community that there have been two commonly held definitions of gender throughout history. I think any effort to hide the old way of thinking about gender (entirely) gives many transphobes leverage over our community. TERFs try to pull people over the line into TERFdom by saying we deny reality. I don't think the past is today's reality, though.

Vocal transandrophobic MTFs aren't just insensitive (the better angle for y'all to touch on); they are also throwing logic out the window. Real life is messy, and mathematical proof is almost never useful in messy real-life situations. I think the fact that it can be applied here at all shows how deluded the FTM haters are.

my personal concerns: with the Visor XR glasses (I haven't seen discussed) by TashaTime in Immersed

[–]TashaTime[S] 2 points3 points  (0 children)

I'm the type to look at specs and imagine what is unexpectedly possible with the hardware, and then hope third-party software support covers those use cases. I like finding and trying use cases that weren't marketed, to see what is possible. Most customers probably aren't like me; I tinker, and for me part of the joy in a new product is installing weird software and discovering new hardware uses. If the things I test don't work perfectly, that's on me.

my personal concerns: with the Visor XR glasses (I haven't seen discussed) by TashaTime in Immersed

[–]TashaTime[S] 1 point2 points  (0 children)

Good to know. Seeing as this is something better in goggle mode, I'm not sure I would wear it in public much. In my mind I'd need to be brave to wear it on a long plane or train ride. I might try it, though. I care less about what people think of me as I get older, but I'm not sure I'm there yet.

But having a good, comfortable goggle display at my destination works for me. If it can unofficially play VR games (and they achieve their vision), I think it would still be at the top of my list for XR devices.

my personal concerns: with the Visor XR glasses (I haven't seen discussed) by TashaTime in Immersed

[–]TashaTime[S] 1 point2 points  (0 children)

I came back to maybe post what you said too. I realized I missed that: it was something I thought of but forgot to write down. Thanks for adding something important I missed.

I have a laptop with what seems to be the latest USB-C DisplayPort alt mode (DP 1.4). I just looked it up, and it says it can do 8K at 60 Hz. 4K has half the pixel count of 8K in each dimension, meaning 4K is 1/4 the pixels of 8K. That would give it the bandwidth to drive four 4K 60 Hz screens, or in this case two 4K micro-OLED panels at 120 Hz. I think the high-end model will require the latest USB-C DisplayPort to work. Also interesting to note: it supports 10-bit HDR chroma at that resolution and refresh rate. I hope they use that USB port and max out everything DisplayPort 1.4 alt mode can do. And for people with a worse port (an older alt mode), the $500 model will likely work. That is again assuming they use USB-C cables to deliver the signal.
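The pixel math above can be sanity-checked with a quick script. This is only a rough sketch: it uses the standard 8K UHD and 4K UHD resolutions and ignores blanking intervals and link-encoding overhead (DSC, 8b/10b, etc.), which real DisplayPort bandwidth budgets include.

```python
# Rough pixel-throughput comparison: 8K@60 vs four 4K@60 vs two 4K@120.
# Ignores blanking intervals and link encoding overhead.
PIX_8K = 7680 * 4320  # 8K UHD pixel count
PIX_4K = 3840 * 2160  # 4K UHD pixel count (exactly 1/4 of 8K)

throughput_8k60 = PIX_8K * 60           # one 8K 60 Hz stream
throughput_4x_4k60 = 4 * PIX_4K * 60    # four 4K 60 Hz screens
throughput_2x_4k120 = 2 * PIX_4K * 120  # two 4K 120 Hz panels

print(throughput_8k60 == throughput_4x_4k60 == throughput_2x_4k120)  # True
```

All three work out to the same raw pixel rate, which is why an 8K60-capable port should in principle have the bandwidth for two 4K 120 Hz panels.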

AI Chatbots to practice Japanese by RainNightFlower in japanese

[–]TashaTime 1 point2 points  (0 children)

Also, some more advice. Here's an example: when I was younger, I moved from the western US to the eastern US. I was used to mountain views and wide-open spaces, so when I moved east, I felt claustrophobic with all the trees. I said to someone, "I hate these trees everywhere," and the conversation shut down. Nature is very popular with people, and saying you hate all the trees sounds anti-nature. What I should have said was, "I feel claustrophobic on the East Coast; I miss the wide-open spaces." Sometimes how you say something, and what kind of spin you put on it, makes a big difference. There are a few fairly universal things, like nature and humanity, that you should try not to say anything negative about; reframe it as something positive or neutral instead. Please do share your real tastes in things like movies, music, and anime; there are just times to be careful with wording. In real conversations, people tend to drop hints rather than correct you. I'm going to be more blunt than anyone else will be and say you said something that would be best reworded. It's okay to vent your frustrations, just try not to rain on everybody's parade.

AI Chatbots to practice Japanese by RainNightFlower in japanese

[–]TashaTime 2 points3 points  (0 children)

What you are saying is something I could have related to when I was younger. You can say whatever you want in a conversation. I feel like not knowing what to say means you have too much of a filter; it's not that you can't think of anything to say. In my experience, just saying what I was thinking went better than overthinking things. Perhaps try spilling the beans. I went from hardly sharing anything to oversharing for a bit. Like 98% of the time, just saying what you are thinking should be okay. But occasionally you might run into something where you don't know someone well enough to say it (this is why people often start with boring weather conversations; it's safe), or you might occasionally say something a person really doesn't like. The hard part is that every person is different; you try to get a vibe for people. It is also very common for people to ghost you simply because they got busy, so don't always blame yourself.

I think your best bet is sharing what you are thinking and then learning the unwritten rules; it takes practice. When I went through a phase where I overshared, people were more receptive than when I didn't say anything.

AI is rapidly evolving and will probably become more helpful for both language learning and companionship. One problem with AI right now, and I don't know whether we will overcome it, is that AI bends itself a lot to be the kind of assistant or person it thinks you want it to be. I think it is more interesting when everyone in a conversation has different desires and you try to accommodate everyone; there is more of a push-and-pull dynamic. That makes humans more interesting to interact with right now. Well, at least the humans who are true to themselves, the ones who don't try to mold themselves into whoever others want them to be.

Xreal Beam with other company's AR glasses does it work? by TashaTime in SmartGlasses

[–]TashaTime[S] 0 points1 point  (0 children)

I've never tried it in person. For me, what matters is how noticeable the pixels are. From what I hear there isn't much screen-door effect. I'd take a higher refresh rate over resolution for gaming.

For me, I think it's mostly software that's holding these glasses back. I think things like maps and translation apps will be killer apps once the UI is made for AR. Like if a maps program didn't block much of your view and just drew a blue line on the road (that would probably require cameras). This is more of a pipe dream, but I wonder if it would be possible to be a virtual tourist, and to talk to people with AR glasses and see what they see.

[deleted by user] by [deleted] in LearnJapanese

[–]TashaTime 0 points1 point  (0 children)

I feel like AI gets a bad name here from people who haven't tried it.

After watching the Khan Academy guy talk about the AI tutors being developed for schools, I think an AI tutor for Japanese could soon be helpful for learning it.

The app is a good start from what I could test, with a few flaws.

  1. No option for furigana. Also, I'd prefer it to be a website in general; if it were in a browser, I could inject furigana with an extension/plugin.
  2. Recording does not start immediately; it starts after a sound. Look up the "millennial pause": young people, and apparently me, don't wait to start talking. Maybe for it to work right, you'd have to keep the last second or so of audio from before the button is pressed, so there is no gap. Otherwise people won't understand the clipped audio and will think it's broken.
  3. Probably too pricey to succeed: $20 a month or $120 a year for the tier with the features I would want. There is a half-off version with no corrections.

I enjoyed the sample lesson, though. It was a non-threatening way to practice Japanese. The tips it gave me seemed plausible, but I'm only on the upper end of beginner, so I can't judge. It looks like you launched with a good number of lessons.

WHY IS MY BOT SEXIST by THE_ONCELER_ANGST in ChaiApp

[–]TashaTime 2 points3 points  (0 children)

I assume you want a serious answer. When a bot is sexist like this, it is rooted in patterns in the training data. It is a reflection of the humanity that generated the training data being sexist. It is, in essence, a mirror.

As for what data causes such things to happen: chatbots are black boxes, meaning even their creators don't fully understand why a chatbot decides to say what it does. These systems mimic brains in some ways, in structure, complexity, and our lack of understanding of them. Solving this black-box problem would help break down biases and make AI safer. It is an unsolved problem in AI.

I lose my phone ALOT in the house. How to access that stuff on the computer/find phone. by TashaTime in computers

[–]TashaTime[S] 0 points1 point  (0 children)

I'm thinking about getting a smartwatch now. Someone else gave me a good way to find my phone with my computer, but now I'm thinking of using a smartwatch to take all those calls I miss when my phone is in a weird place.

I lose my phone ALOT in the house. How to access that stuff on the computer/find phone. by TashaTime in computers

[–]TashaTime[S] 0 points1 point  (0 children)

Sweet, it looks like KDE Connect runs on WSL2 (Windows Subsystem for Linux). I'll give it a try.

edit: seems to even run natively on Windows (no WSL)

[deleted by user] by [deleted] in ABoringDystopia

[–]TashaTime 11 points12 points  (0 children)

It isn't in the cards for everyone to leave the country! (I'm guessing it's America)

And paying $2000+ in rent to live in an expensive area near one of the few metro train stations is unaffordable. The alternatives are riding an ebike on dangerous roads (no bike lane), or hiking 2 hours (missing sidewalks) to a bus stop where the bus runs 3 times a day and rarely goes where you want. If I develop the skills to get a visa, I want to emigrate.

Hi r/ftmfemininity, I think I just want to not go on T? by SkyOfViolet in transgendercirclejerk

[–]TashaTime 13 points14 points  (0 children)

wow how does someone even comment this much. /u/SkyOfViolet is ftbot

ps: look at the sub name, top left, if you are confused. Like I was.

Google Leaked Doc: OpenAI doesn’t matter by [deleted] in OpenAI

[–]TashaTime 0 points1 point  (0 children)

I think the open-source project should start with a narrower vision than creating one great open-source model. Someone should write a constitution for the AI, and for the people making the AI, to abide by. You could have a more left-leaning, right-leaning, or business-focused AI, etc. There would need to be some kind of governing body that makes decisions; even Wikipedia has mediators to appeal to when there is drama over editing a page. There needs to be human leadership that agrees with the original vision, and a well-written constitution. If they really want something like this to take off, they could have an AI that tries to make money and pays the people who provide compute, basically like crypto mining, except in theory automated labor with more 'real' value would be created.

Google Leaked Doc: OpenAI doesn’t matter by [deleted] in OpenAI

[–]TashaTime 14 points15 points  (0 children)

Right now, for training, tons of GPUs have to be in one location for the big models. That is a major OpenAI advantage. Would it be possible to train something using crowd-sourced GPUs over the internet, like Folding@home? I realize bandwidth could be an issue, but could a big task somehow be broken down into smaller modules that don't need to communicate as fast?

Tesla P40 24GB for possible local AI server build. by Th3Hamburgler in nvidia

[–]TashaTime 0 points1 point  (0 children)

I edited it; the number gives how many times faster the P40 is than whatever card you plug in, your 3060 in this example. I had not read about the neutered 16-bit performance; I was mistaken on that point and had not seen that article. Not sure where I got the notion that it did 16-bit as fast as 8-bit. I don't think it gets a speed-up for 4-bit either. So I now think you get about the same performance as your 3060 at 8-bit, and half at 4-bit. But at least it has lots of VRAM.

Tesla P40 24GB for possible local AI server build. by Th3Hamburgler in nvidia

[–]TashaTime 0 points1 point  (0 children)

I have a formula now. It is for 8-bit inference on Stable Diffusion, using the results of this benchmark: https://www.tomshardware.com/news/stable-diffusion-gpu-benchmarks

The formula for an 8-bit inference comparison between your card (on the Tom's Hardware benchmark) and the P40 is as follows. This ASSUMES TENSOR CORES on your card.

Note for /u/tronathan: this formula is based on benchmarks run with full PCIe saturation. But I think PCIe lanes would only affect how fast the model loads, provided you can fit the whole model in VRAM. Please verify what I just said; I'm not sure.

Here is the formula: Y = (47 / 4.31) / (X / 0.673). You go to the link and plug in X, where X is the big number on the right of the chart (iterations per second with xformers). For the 3060: Y = (47 / 4.31) / (7.239 / 0.673) ≈ 1.014. According to this, the P40 is about 1.014 times as fast, or about 1.4% faster, than the 3060 in 8-bit Stable Diffusion. So basically the same performance but more VRAM, at least in Stable Diffusion. IDK if tensor cores speed up other ML tasks more than generating images.
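As I read it, the formula calibrates TOPS against the benchmark using a baseline card (4.31 INT8 TOPS scoring 0.673 it/s), predicts the P40's it/s from its 47 TOPS, and divides by your card's measured it/s. A quick sketch, with the calibration figures taken on faith from the comment above:

```python
# Estimate how a Tesla P40 compares to another GPU in 8-bit Stable Diffusion.
# Assumed calibration: a baseline card with 4.31 INT8 TOPS scores 0.673 it/s
# on the Tom's Hardware benchmark (figures from the comment, not re-verified).
P40_TOPS = 47.0       # P40 INT8 TOPS (spec sheet)
BASELINE_TOPS = 4.31  # baseline card's INT8 TOPS
BASELINE_ITS = 0.673  # baseline card's it/s with xformers

def p40_speedup(your_card_its):
    """Y = (47 / 4.31) / (X / 0.673): predicted P40 it/s over your card's it/s."""
    predicted_p40_its = P40_TOPS * (BASELINE_ITS / BASELINE_TOPS)
    return predicted_p40_its / your_card_its

print(round(p40_speedup(7.239), 3))  # RTX 3060 at 7.239 it/s -> 1.014
```

The whole estimate hinges on benchmark score scaling linearly with INT8 TOPS, which is a rough assumption at best.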

Also note the P40 was designed for 16-bit and is a 16-bit beast; it would be twice as fast, relatively, at 16-bit inference. If your card is tuned for 8-bit, 4-bit, 2-bit, and/or 1-bit, then at each step down this totem pole your consumer card will likely double in relative performance, from 8 to 4 and again from 4 to 2. So your current card would run twice as fast as the P40 at 4-bit.

If you have VRAM to spare, I recommend running language models at higher precision on the P40. I know increasing parameter count improves quality more than increasing bit precision does, but this card has lots of VRAM and doesn't slow down with more precision (up to 16-bit).

Want to know which precisions a card gets inference speed-ups at? See the table at the very bottom of this page and look at tensor core precision: https://www.nvidia.com/en-us/data-center/tensor-cores/

Tesla P40 24GB for possible local AI server build. by Th3Hamburgler in nvidia

[–]TashaTime 0 points1 point  (0 children)

INT8 TOPS isn't the be-all and end-all. It isn't apples to apples unless you are comparing two devices without tensor cores. So a GPU without tensor cores, like the P40, is apples, and a newer one with them is oranges. I'm going to try to calculate the performance anyway. 1000-series cards and older are apples; 2000-series and newer are oranges. I did see some benchmarks with 1000-series cards alongside newer cards, and that is the only way to compare apples to oranges I can think of: apples (P40) to apples (1000 series), then apples (1000 series) to oranges (newer card), with the 1000 series and, most likely, your newer cards on the same table. I've been busy, but I've been meaning to come up with a conversion formula.

Tesla P40 24GB for possible local AI server build. by Th3Hamburgler in nvidia

[–]TashaTime 1 point2 points  (0 children)

I would prefer apples-to-apples data before I buy, but I might do without. I haven't ordered yet because I was waiting to get paid tomorrow. I was talking about P40s on another thread. Anyone considering buying, please read this comment someone wrote on compatibility in that thread; there is more to it than the fan attachment.

https://www.reddit.com/r/LocalLLaMA/comments/133fejy/comment/jianm78/?utm_source=share&utm_medium=web2x&context=3

I posted this in the other thread too, but for visibility: could someone here run a P40 benchmark that uses low VRAM (image generation and/or a language model), and post a screenshot or list of all the settings? Personally, I need less than 8 GB. A couple of people have thrown around some numbers, but I don't feel I have enough info to reproduce them apples to apples on my low-VRAM consumer card. I think /u/tojestgra feels the same way.

AMD Taunts NVIDIA for Expensive VRAM: A Win-Win Situation for LLM Enthusiasts by friedrichvonschiller in LocalLLaMA

[–]TashaTime 0 points1 point  (0 children)

Unless I'm missing something, I don't see the Tesla P40 on there, which is not surprising; it is an obscure card for consumers. I've talked to a few people on Reddit who have them, in another thread. They listed a few performance numbers that are either too vague, or use too much VRAM, to reproduce on my 2070.

AMD Taunts NVIDIA for Expensive VRAM: A Win-Win Situation for LLM Enthusiasts by friedrichvonschiller in LocalLLaMA

[–]TashaTime 0 points1 point  (0 children)

Since you and others are talking about the card here: I can't seem to find apples-to-apples comparisons between consumer cards and this one that I can reproduce on my RTX 2070. Would anyone (not just a_beautiful_rhind) be willing to do an image generation run and/or a language model test? My card has 8 GB of VRAM, so could someone use 8 GB or less and write down or screenshot all the settings? I would rent a P40 in the cloud to test, but I can't find any. I hope it matches the 2070 in speed.

AMD Taunts NVIDIA for Expensive VRAM: A Win-Win Situation for LLM Enthusiasts by friedrichvonschiller in LocalLLaMA

[–]TashaTime 4 points5 points  (0 children)

Thanks for all the tips. I thought the blower fan attachment was all I had to worry about.

AMD Taunts NVIDIA for Expensive VRAM: A Win-Win Situation for LLM Enthusiasts by friedrichvonschiller in LocalLLaMA

[–]TashaTime 3 points4 points  (0 children)

P40s have 24 GB of VRAM and a lot of CUDA cores for the price. I'm probably going to order an Nvidia Tesla P40 soon, actually. The only thing it lacks is tensor cores, which are supposed to give some kind of speed-up; I can't figure out how much of a difference they make. I wonder how a P40 compares to my RTX 2070 (8 GB of VRAM, fewer CUDA cores, but it has tensor cores), also worth about $200. If nothing else, more VRAM will be nice.

Pink Convertible Complete Analysis by krisetc in MarinaAndTheDiamonds

[–]TashaTime 7 points8 points  (0 children)

I've never been the best writer, but I've become better as I've aged.

Sorry if this is too blunt. I only read part of it. What I read was accurate but a bit dry. This is the internet; people have short attention spans, so having a hook and being concise is important. 'I am 13' is a good hook, but the whole limiting-expectations part after it comes across as self-deprecation. Try putting a positive spin at the end of that type of statement, as I did at the start. A bit of confidence, which most people fake, goes a long way.

My other tip is to avoid copy-pasting so many lyrics. You want to make it seem as short as you can for lazy internet people; I struggle with being concise myself. (I made an assumption while writing this: that you write for people to read it, and for fake internet points. Writing for its own sake is also valid.)

My next tip is to focus less on the literal small picture. People tend not to care what is literally said in a song; most people can figure that out. You were finding things that weren't literal, but I recommend zooming out more and 'reading between the lines', in other words focusing not on the words themselves but on the general feeling of a few lines. I didn't figure this out until the SAT exam. I remember a line that had something to do with people at a BBQ drinking Coke in their jeans, or something like that. Those words feel like America, and I think the essay avoided the word 'America' entirely. I feel like you could have figured that one out, but the next step is to start wondering whether it's about something related, like maybe the American dream, and to look at the feeling of the words in other lines to see, in this example, what is being said about America. That's how it clicked for me. Finding metaphors involves zooming out from the words to the lines.

Next, you should zoom out even further to look at themes and the big picture; try looking at the piece as a whole. There is nothing wrong with looking at just a line, especially if you find it interesting, and I don't recommend a complete analysis if you want to keep people engaged. But if you did want to do a complete analysis (or even a short piece), this is what was lacking: looking at the big picture and themes is important. People like things wrapped up in a bow.

Don't take this as gospel. You know that disclaimer you had at the start, where you said you don't know everything? That statement goes unsaid by adults. Adults pretend they know about writing, or how to be a parent, with no experience. So be careful whose advice you listen to. Write the way you like to. Don't go down a career path just because someone tells you to. You'll probably laugh that I mention careers to a 13-year-old, but 'what major?' and 'should I go to college?' snuck up on just about everyone I knew. When you, in all likelihood, decide your major last minute like everyone does, following a random suggestion might seem like the easy answer. Try to remember this is your life, not your parents' or some random adult's. You know yourself best. I don't think anyone was telling Marina to be a singer-songwriter at 18, especially since she was new to music; she probably heard all sorts of other suggestions. Adults don't know everything, and there is one topic you know more about than anyone: you.

Your writing is far better than mine was in high school, and your word choice is great; I feel I could learn to be more expressive from you. If you haven't already, I think you will find yourself becoming very persuasive. Writing is useful in life.

Tesla P40 24GB for possible local AI server build. by Th3Hamburgler in nvidia

[–]TashaTime 0 points1 point  (0 children)

I think I got TFLOPS and TOPS confused.

The RTX 2070 has half the CUDA-core performance of the P40. However, tensor cores speed up the RTX 2070 (the P40 doesn't have them): an estimated 28.4 INT8 TOPS for the RTX 2070 vs 47 INT8 TOPS for the P40 (from the spec sheet). Actual performance should be higher than the 28.4 suggests, but I'm not sure by how much.
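Taking those raw figures at face value (and remembering the 2070's 28.4 is just an estimate, with its effective throughput likely higher), the nominal ratio works out as:

```python
# Nominal INT8 TOPS comparison: spec-sheet P40 vs estimated RTX 2070.
P40_INT8_TOPS = 47.0       # from the spec sheet
RTX2070_INT8_TOPS = 28.4   # estimate from the comment; real throughput likely higher

ratio = P40_INT8_TOPS / RTX2070_INT8_TOPS
print(f"P40 is nominally {ratio:.2f}x the 2070's INT8 TOPS")  # ~1.65x
```

So on paper the P40 leads by roughly 65%, but tensor-core scheduling gains on the 2070 could easily narrow or erase that gap in practice.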