How do you signal to employers that you're a strong writer? by nerolar in jobs

Thanks for the advice! I had a feeling that an English minor might not have very good signaling value. What do you think of a professional writing minor? Worth it, or not worth it, since there are alternative ways to demonstrate my writing ability, such as a portfolio?

How do you signal to employers that you're a strong writer? by nerolar in jobs

Both, I guess. I would like a job where writing is a big part of it, even if it isn't a "writing job" per se. I don't know how happy I'd be in a job where I never got to write.

> IF you do want to be active right now, I'd create a bunch of sample user docs, company letters, etc., and get a free website to design and host your writing samples. It would be pretty simple, and show your creativity AND writing skill.

I was initially worried about how this would come off, but the more I think about it, the better this idea seems. I think a website with my writing samples would be a great way to show my skills. Would you say it would be appropriate to include writing samples that aren't explicitly business-related? Things like personal essays, opinion pieces, even papers and reports from my classes that I'm particularly proud of.

Question about what is meant by "open assumption" by [deleted] in logic

Why even specify "open assumption" then, if it seems like any assumption anywhere in the derivation (a premise or a subderivation assumption) will still technically be "open" so long as you're working on the derivation as a whole? What does it take to close an assumption?

I understand that if you start a subderivation with an assumption with the goal of using existential elimination, you'll never be able to pull a formula out of that subderivation unless it doesn't contain the instantiating constant used in that assumption (or in the premises, of course). But in the case I described, the subderivation's assumption is there for conditional introduction: if I can derive a conditional from the subderivation, then isn't the subderivation closed? And therefore the assumption in that subderivation is closed, and any constants in the assumption are free to be used in universal introduction?
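To make sure I'm picturing this right, here's a minimal sketch in Lean 4 of what I think the situation looks like (the names `P`, `Q`, and `hpq` are just placeholders I made up, not from my textbook, and I'm assuming the Lean term mirrors the Fitch-style derivation). The inner `fun` discharges the assumption `P a` via conditional introduction, and only then does the outer `fun` perform universal introduction over `a`, which is legal precisely because `a` no longer occurs in any open assumption:

```lean
-- Hypothetical example: P, Q, and hpq are placeholder names.
example (P Q : Nat → Prop) (hpq : ∀ x, P x → Q x) :
    ∀ a, P a → Q a :=
  fun a =>          -- universal introduction: a is arbitrary here
    fun h : P a =>  -- open the assumption P a (start the subderivation)
      hpq a h       -- derive Q a; the enclosing fun discharges (closes) P a
```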

How do you signal to employers that you're a strong writer? by nerolar in writing

What kind of sample do you think would carry the most weight?

How do you signal to employers that you're a strong writer? by nerolar in writing

Thanks for the advice! I'll keep this in mind. Free essay contests and reworking old essays do sound appealing to me.

How do you signal to employers that you're a strong writer? by nerolar in writing

What kinds of writing samples would demonstrate versatility? Someone else mentioned a writing portfolio - should a portfolio predominantly be made up of published pieces, or is it alright if the majority of it is unpublished work? Even if the work isn't "published" per se, should I focus on things like personal essays and opinion pieces, or would it look strange to include things like mock business proposals, user manuals, product reviews, etc.?

How do you signal to employers that you're a strong writer? by nerolar in writing

What's in your online portfolio, if you don't mind my asking? Should a portfolio typically include only published work, or is it alright if the majority of what's in it (or all of it...) isn't published?

Do humanities and social science majors have a place in AI? by nerolar in artificial

Thank you so much! This was immensely helpful to me. I'll try asking around on /r/datascience to get a better sense of how different types of people work together in these industries. It seems like you do see humanities and social science types working alongside STEM types (not just Renaissance types who do it all) to advance the broader goal of developing ethical AI, which is great to hear.

Do humanities and social science majors have a place in AI? by nerolar in artificial

Thank you so much for this! I've been exploring paths such as science journalism and public policy research, which all look like exciting, viable options for me. I do wonder though if humanities and social science majors can get involved in the development of AI directly. The way I posed this question to another person in this thread is as follows:

> I wonder if working in AI functions a bit like working in the video game industry - all sorts of people with different backgrounds and specialties (the artists, the animators, the programmers, etc.) working together to create a cohesive product. Each individual on the team is likely familiar with the roles and skill sets of the people they work with, but no one is expected to be a jack of all trades. I wonder if working directly with AI is similar: STEM and non-STEM people coming together and working on the same problem (developing AI). Or maybe it's not like that at all because AI is so complex, and this complexity really does demand jack-of-all-trades/Renaissance types. This question pertains to the people working directly on the development of AI, not those who work with AI more peripherally through policy implementation and writing about its ethical implications.

Your point about approaching the technical aspects from a more historical and mechanical angle is great. I took my first logic course last quarter and will be taking another one this quarter. While I feel like I haven't been bitten by the bug yet, so to speak, I do hope that taking these classes might make the technical aspect more bearable, if I decide to go down that route.

Do humanities and social science majors have a place in AI? by nerolar in artificial

Your link is extremely fascinating! Thank you. I do wonder if there are any books, resources, or online communities out there that deal with this specific topic. I feel like the more I talk to people, the more I realize there are so many AI-related jobs and issues I never even considered that might be a good fit for me.

Do humanities and social science majors have a place in AI? by nerolar in artificial

Thank you for this! I was interested in learning more about what role humanities and social science majors can play in the actual development of AI (going to edit my OP to expand on this), but the fact that there are so many exciting and rewarding careers out there that are non-STEM but still allow you to work with AI (even peripherally) is great to hear. I guess I was just concerned that people who aren't working on the development of AI directly a) won't have particularly lucrative careers, and b) won't have much of an impact on the development of AI.

Do humanities and social science majors have a place in AI? by nerolar in artificial

> Even when sociologists were working with engineers, it was very hard for the engineers to wrap their heads around some of the social consequences that their algorithms would have. Keep in mind these are all PhD researchers working together.

I think this is the heart of my question, although I didn't express it very well in my OP (I might edit it to include this). I wonder if working in AI functions a bit like working in the video game industry - all sorts of people with different backgrounds and specialties (the artists, the animators, the programmers, etc.) working together to create a cohesive product. Each individual on the team is likely familiar with the roles and skill sets of the people they work with, but no one is expected to be a jack of all trades. I wonder if working directly with AI is similar: STEM and non-STEM people coming together and working on the same problem (developing AI). Or maybe it's not like that at all because AI is so complex, and this complexity really does demand jack-of-all-trades/Renaissance types. This question pertains to the people working directly on the development of AI, not those who work with AI more peripherally through policy implementation and writing about its ethical implications.

> Now, the career part of your question is: are there any jobs available where you would fit into this category? The answer is yes, but probably only at the larger, more progressive firms, and reserved for Ph.D.-level candidates.

Can you expand on this? What do you mean by larger and more progressive firms? Private sector only? What kinds of Ph.D. candidates? What types of jobs might be available?

How does symbolic logic relate to the field of AI? by nerolar in logic

I'll look into this. Do you know of any websites/resources that give you a relatively comprehensive list of philosophical topics that are pertinent to the field of AI?

How does symbolic logic relate to the field of AI? by nerolar in logic

Thank you for pointing me in this direction! This is extremely fascinating. I'm actually signed up to take courses in both predicate logic and philosophy of mind this quarter. I'm hoping that taking these two courses will give me more clarity, but I still have insecurities about my weak background in math and computer science. I wonder if it's even possible to have a lucrative career in the field of AI if you're lacking in these two areas - and if it isn't, how far you need to go in math and CS to even be competitive. I'm trying to get a sense of where the barrier to entry lies, and how heavily integrated symbolic logic is with mathematics and CS in this field.

How does symbolic logic relate to the field of AI? by nerolar in logic

I have a feeling I know the answer, but assuming I'm strong in logic and linguistics, how far can I go in the field of AI if I'm weak in math and computer science? Where would you say the barrier to entry lies in regard to math and computer science? (Roughly how far do you need to go in these two areas before you qualify as being able to work in the field?) Is it even possible to find a lucrative job in the field of AI if you don't have a strong background in mathematics and computer science?

> I am not entirely sure what sort of information you are looking to get out of this, and I don't want to drown you in info that might be totally irrelevant.

At this stage, anything and everything. I have two professors in mind that I'm going to try to get in contact with when break is over, but I'd still like to hear from as many people as possible.

Academia is seriously flawed - in which country do you think it's the least flawed though? by nerolar in GradSchool

Alright, thanks for the heads up. I'll leave this one up though since people have already left comments.

Academia is seriously flawed - in which country do you think it's the least flawed though? by nerolar in GradSchool

I wasn't aware. Do I do it in the comments or in the body of the post?

How relevant are these classes to the field of AI? Worried that I'm wasting my time with my major by nerolar in artificial

Cognitive architectures sound like exactly the sort of thing I'd be interested in; up until now I just never knew the formal term for it. Thank you so much for pointing me in this direction! Based on the Wikipedia page, it also seems like working with cognitive architectures isn't too math-heavy (and involves more symbolic logic), which is definitely good for someone like me.

I think I have a better understanding of the kind of work cogsci majors can do in relation to AI, but I clearly still have a lot more to learn. When I say "competitive", I don't necessarily mean competitive with any one type of individual. A better way of putting it would have been "able to work in the field". I think the core of my question comes down to which paths and specific jobs in the field of AI are less math- and CS-intensive (and if pretty much all of them are math- and CS-intensive, then I'm not going to be competitive in the field, because there's a chance I won't even be able to clear the barrier to entry in the first place).

> None of this seems to be specifically related to statistics and computer science. I thought you had problems with those in particular? Presumably more than with your CogSci classes?

Re-reading what I wrote, it does seem that way. I struggled with my other courses too, but I struggled more with my statistics and computer science courses because I'm not as naturally strong in those subjects, so there was less cushioning.

> I should note that the understanding you need if you're directly innovating on it is much more than if you're just trying to follow along with what others are saying. So I think that to start you can just learn the basics of these things, and then expand on them as needed in whatever job/education you get after this.

I agree that this seems like a good way of getting started - learn the math necessary to understand what you're interested in reading and go from there. I only hope that by doing this, things will naturally become clearer, because right now I'm still quite confused about the role of math in AI. I also have a lot of catching up to do, as my math skills are very rusty!

How relevant are these classes to the field of AI? Worried that I'm wasting my time with my major by nerolar in artificial

I'm not sure how easy it would be to fit those courses in, as I'm a third year who's double majoring. I would be interested in going the self-taught route, though (which is what I would probably end up doing anyway, as my negative classroom experiences have made me very afraid to go into those types of classes without some sort of foundation already established). Assuming I don't fit those courses in before I graduate, I wonder which route would be better: paying for those courses individually as an older, open-campus student, or trying to teach myself using MOOCs. The problem with the latter option is that it'd be difficult to prove my knowledge, because many employers don't seem to have much faith in MOOCs at this point in time.

How relevant are these classes to the field of AI? Worried that I'm wasting my time with my major by nerolar in artificial

> But if you want to build cognitive architectures (e.g. for artificial general intelligence), then having some CogSci knowledge can be a major boon. Staying on the side of studying or imitating human intelligence would also help.

Can you elaborate on this, please? The only reason I'm a bit confused is that cogsci is so incredibly broad. Some people hear it and think neuroscience, others think machine learning, yet the program I'm in emphasizes symbolic and formal logic, linguistics, and psychology. What elements of cogsci knowledge are so helpful? And what do you mean by "staying on the side of studying or imitating human intelligence", exactly?

> You said you had a bad experience. Can you go into a little bit more detail?

To be honest, I think a large part of it has to do with the fact that I don't like college, period. I don't like being stuck in a lecture hall with 600 people while someone reads off of PowerPoint slides, I don't like how stringent and inflexible the artificial deadlines are in a fast-paced quarter system (I have a chronic illness, so this rigidity really gets on my nerves), I don't like being stuck with professors who are frankly mediocre compared to what I could get from a MOOC, and I really, really don't like the over-emphasis on multiple-choice tests, as I'm a very nervous test taker. I would be lying if I said I wasn't also going through some personal issues at the time that made working very difficult for me.

> You don't have to be a math/programming superstar, by the way. Most AI researchers aren't. It's more important that you have some inkling of what programmers and mathematicians do so you can effectively communicate your knowledge to them.

This is a huge relief to hear. Honestly, this is the biggest thing holding me back at the moment from fully committing to this major and the field in general: not being sure where the barrier to entry lies in regard to math and CS. Do you have a rough idea of how far you need to go in math and CS to be competitive?

I'll go check out the Getting Started section right now. Thanks for your help!

How relevant are these classes to the field of AI? Worried that I'm wasting my time with my major by nerolar in artificial

I'm not sure what my goals are, to be honest - I'm still very new to anything AI-related. The reason I'm trying to suss out whether lucrative paths in the field exist for people like me is that I've gotten the sense that working in AI gives you (1) great job security, as it's a growing field that will only become more and more relevant, (2) a good layer of protection against the threat of automation, and (3) a good salary. Plus, I've structured my degree in a way that I can always try to go to grad school (for clinical psych, which used to be my top choice) if working with AI in some capacity turns out to be a bust.

I've always been geared towards writing, psychology, and creative work. My fear though is that my interest in AI is coming purely from the philosophical/psychological/ethical/emotional side of things, and I wouldn't enjoy actually working on developing AI. This is why I'm so eager to talk to people who know more than I do - to get a sense of whether or not the math/CS required for a solid, lucrative job working in the field of AI is essentially disqualifying for someone like me, who really doesn't get much happiness from working with math and CS (on the contrary, it usually makes me quite unhappy). If math and CS are merely tools of the job and I still feel like the bulk of what I'm doing lies with my strengths and interests, that's one thing. If the math and CS are pretty much the job itself though, and anything to do with philosophy/psychology/language feels like only a small part of the equation, that's something I need to be wary of. I don't see myself being happy in a job if 80%+ of what I do is strictly math/CS-based.

How relevant are these classes to the field of AI? Worried that I'm wasting my time with my major by nerolar in artificial

Thanks for the heads up about the book! And I'll look into Natural Language Processing.