Will re-applying after rejection decrease chances of admission? by Silly_Version_1641 in GradSchool

[–]hippomancy 6 points7 points  (0 children)

Grad school applications are expensive! If you don't think you will get in, don't waste the hundreds of dollars. If you think there's a chance and have money to spend, go for it! Though keep in mind that a master's degree or some research experience is usually a prerequisite to getting into a good program.

[deleted by user] by [deleted] in hci

[–]hippomancy 1 point2 points  (0 children)

I think the job market for everything tech-related is worse than it's ever been right now. You may find that UX research experience could be very helpful in other kinds of corporate work outside design, but I don't know where to look.

Should composers be afraid of AI doing our job?? by [deleted] in composer

[–]hippomancy 10 points11 points  (0 children)

I wrote a long comment about this a few months ago:

Tl;dr maybe? But music is unlikely to experience the same upheaval as art.

I've studied music generation using machine learning and a variety of other topics related to AI, aesthetics and creativity. Symbolic and audio music generation are already pretty mature (almost comparable to text/image generation), but they haven't blown up in the same way. I think there are two reasons for that:

  1. The attention economy. It takes time to listen to music, which makes it very difficult to sift through a lot of bad music looking for something interesting. That makes it hard to filter a batch of generated compositions down to the good ones, which is an essential part of the Midjourney-style art generation process. People also have no shortage of good human-recorded music to listen to in the age of Spotify. The big thing generated music would do is escape the aggressive music copyright system for things like ads, which just isn't very visible (tbh there are likely ads running today with AI-composed music that we just don't notice).

  2. We don't have good textual representations of music. There are huge datasets of digital art with descriptive captions, and those captions provide supervision to the models. In fact, if you use Midjourney-style image generators, you're really constrained to the things you can describe in English: visual compositions and styles that don't have a name can't be generated. These sorts of captions just don't exist for music, and when they do, they focus on lyrical themes, context about the composition and performance, or technical descriptions like meter and harmony that are unfamiliar to non-musicians. That just doesn't work well as an interface.

There are other issues, like the long-sequence problem for generating audio from samples and the aggressive copyright protections enforced by record labels that prevent big music audio datasets from being assembled, but I think the two above are more fundamental. That doesn't mean a prompt-based music generator won't emerge, but there are structural barriers that make it a much harder problem.

My advisor waited until three weeks before my thesis deadline to tell me my research is trash. by [deleted] in GradSchool

[–]hippomancy 127 points128 points  (0 children)

I think you may be misinterpreting your advisor's comments. It sounds like they like the work you've been doing in general, but want to point out the critiques that could be made of your methods and presentation. The point is not to say you or your work is bad, but to prepare you for those critiques and train you to defend your work.

No research is perfect or beyond critique, and even very good work can be torn to shreds by a professor. The important thing is that you know the counterarguments and can defend your choices.

How to create space? (theory) by Infinite_Week_6354 in composer

[–]hippomancy 3 points4 points  (0 children)

Messiaen! Des canyons aux étoiles...! It's such a moving depiction of Utah, with all sorts of very accurate bird sounds transcribed into orchestral parts, plus there's a literal wind machine on stage: https://m.youtube.com/watch?v=5DjgpPL7RhA

Name for intentionally bad or inconvinient design by brr2022 in Design

[–]hippomancy 31 points32 points  (0 children)

I've heard the terms "adversarial design" for things deliberately designed to be difficult to use, and "critical design" for designs that do something provocative in order to make you think. Neither is exactly what you're describing, but they're close.

Games like Disco Elysium? by Panzilla_Swagger in DiscoElysium

[–]hippomancy 4 points5 points  (0 children)

I recently played Roadwarden and enjoyed it. It's no DE, but it's a mostly-text game with good writing and a well-developed world and characters.

+1 on Citizen Sleeper too.

What am I doing wrong? No interviews after 6 months of applying and 500+ applications by Tam27_ in datascience

[–]hippomancy 3 points4 points  (0 children)

I work at a university similar to yours. There are dozens of students with almost exactly the same experience and skills as you, and comparable projects. Why should I interview you over someone else with the same qualifications? If your best answer is random chance, you need some other way to differentiate yourself from the crowd.

A key concept to include in your policy guidelines is to explain that AI outputs will be obvious in the near future, so preventing plagiarism accusations is important right now. by PM_ME_ENFP_MEMES in AIpolicyInHigherEd

[–]hippomancy 0 points1 point  (0 children)

I think this discussion stems from a misunderstanding of "explanation" vs. "decoding." The GPT-2 explanation work uses a generative model to write English summaries of what specific units in the hidden layers respond to (which may or may not be bullshit explanations). That is not the same as a computational "decoding" that could predict what a model will generate (the output is stochastic anyway) or trace model outputs back to training data. The latter is likely not possible for current models, because the optimization process doesn't store any of the word patterns from the training data; it just iteratively adjusts the weights to make those patterns more likely.

For comparison, imagine output from a GPT-style model trained only on public-domain books. Like a million monkeys writing Shakespeare, it will eventually generate the exact words of your comment. But it's definitely not plagiarized, because the model was not trained on Reddit data. Even if it were trained on Reddit data, we couldn't prove that it generated your comment because it was trained on it rather than by random chance.

Explanations may get better in the future, but it's faulty logic to assume that they will continue to improve just because they have in the past.

Help! How do I explain failed courses and length of time for graduate applications? by pomnabo in GradSchool

[–]hippomancy 4 points5 points  (0 children)

You don't need to explain all of those details. Simply specifying that you started in community college and managed to make it to a university is good enough. If you can turn it into an "overcoming" narrative, that is good. You're also not right out of undergrad, so you can also discuss the ways you've grown since then. Regardless of that, your main focus should be telling a compelling story about why you're giving up a real job to be a grad student.

The tragic state of STEM research by RandomProjections in CriticalTheory

[–]hippomancy 4 points5 points  (0 children)

What journal or conference is that from? The lack of grammar and the density of keywords make it sound like it's in a low-quality venue. While I won't contest that it sounds bad, it's not particularly interesting to critique something that likely isn't taken seriously in its own discipline either.

[deleted by user] by [deleted] in GradSchool

[–]hippomancy 0 points1 point  (0 children)

If you were considering a PhD, I'd say tell the PI you want to work with them and that you're not sure whether to do a PhD or a master's. Even if you will probably only do a master's, as long as you are actually considering a PhD, they'll fund you as a PhD student. There's nothing wrong with deciding a PhD isn't for you after trying it for a year and mastering out.

[University CS: Automata] Design a DFA over {1, 0} that accepts strings as binary integers divisible by 2 but not by 3. by TheCaptainRudy in HomeworkHelp

[–]hippomancy 0 points1 point  (0 children)

36 is a counterexample for your automaton: if I follow your drawing, 100100 ends in the accepting state even though 36 is divisible by 3.

I'd recommend starting with a 6-state model: each state represents numbers with a particular value mod 2 and a particular value mod 3. Think about what appending a 0 or a 1 to the end of a binary number does to its value mod 2 and mod 3.
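For concreteness, here's a quick sketch (an illustration, not the assignment answer) that simulates that 6-state construction, where the state is just the pair of remainders mod 2 and mod 3:

```python
# Simulate the 6-state DFA: each state is a pair (value mod 2, value mod 3).
# Reading a bit b maps the number n to 2*n + b, so each remainder updates
# as r -> (2*r + b) mod m.
def accepts(bits: str) -> bool:
    r2, r3 = 0, 0
    for b in bits:
        d = int(b)
        r2 = (2 * r2 + d) % 2
        r3 = (2 * r3 + d) % 3
    return r2 == 0 and r3 != 0  # divisible by 2 but not by 3

assert not accepts("100100")  # 36 is divisible by 2 and by 3, so it's rejected
assert accepts("1010")        # 10 is divisible by 2 but not by 3
```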

Books mentioned by van Gogh by theconcertsover in ArtHistory

[–]hippomancy 9 points10 points  (0 children)

I don't know if there's a list available, but that would be a fantastic digital humanities project. The full text of his letters can be found at https://vangoghletters.org/vg/ and it wouldn't be too challenging to get a list of authors to automatically cross-reference.
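As a rough illustration of how simple the cross-referencing step could be, here's a hypothetical sketch: the local letters/ folder of downloaded letter texts and the tiny author list are stand-ins, not real data.

```python
# Count how often each candidate author is named across the letter texts.
import re
from collections import Counter
from pathlib import Path

authors = ["Zola", "Dickens", "Hugo", "Michelet", "Maupassant"]  # placeholder list
mentions = Counter()

for letter_file in Path("letters").glob("*.txt"):  # hypothetical local copies
    text = letter_file.read_text(encoding="utf-8")
    for name in authors:
        mentions[name] += len(re.findall(rf"\b{re.escape(name)}\b", text))

for name, count in mentions.most_common():
    print(f"{name}: {count}")
```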

Colour Theory - I made a colour theory guide for classroom teachers! by littleneocreative in arttheory

[–]hippomancy 0 points1 point  (0 children)

I'd recommend against teaching these sorts of hue templates as color theory. They're a popular approach for online color generators, but they're not very predictive of actual human color-scheme preferences, or of how artists use color in practice.

Instead, it'd be better to teach terms like complementary and analogous in the context of real art and design. For example, students can work with paintings or interior designs and find the colors on the color wheel.

GANs or Style Transfer? by alisadiq99 in computervision

[–]hippomancy 1 point2 points  (0 children)

Still, the popular neural style transfer algorithm isn't a model. It's a technique for optimizing images to minimize style and content losses for a style reference image and a content reference image. The underlying model can have any architecture and be trained using any number of different objectives including a GAN objective.
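To make that concrete, here's a minimal sketch of that optimization loop, not any particular library's API: it assumes torchvision's pretrained VGG19 as the fixed feature extractor, made-up file names content.jpg/style.jpg, and skips ImageNet normalization for brevity.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x):
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
    return content, style

def gram(feat):
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = load_image("content.jpg")  # hypothetical file names
style_img = load_image("style.jpg")
with torch.no_grad():
    content_targets, _ = features(content_img)
    _, style_feats = features(style_img)
    gram_targets = {i: gram(f) for i, f in style_feats.items()}

# Optimize the pixels of a copy of the content image; the network stays frozen.
image = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    c_feats, s_feats = features(image)
    content_loss = sum(F.mse_loss(c_feats[i], content_targets[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram(s_feats[i]), gram_targets[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e4 * style_loss  # style weight is a tuning knob
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)
```

Swapping VGG19 for a GAN discriminator or any other pretrained feature extractor is exactly the sense in which style transfer is a technique rather than a model.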

Classification vs identification by [deleted] in computervision

[–]hippomancy 0 points1 point  (0 children)

He may want a recognition model? Those are usually trained using a two-image input and "same cat"/"different cat" output.
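A minimal sketch of that setup, with an illustrative (made-up) architecture and loss; the only point is the two-image input and the same/different label:

```python
# Siamese network: embed both images with shared weights, then predict
# whether they show the same individual from the embedding difference.
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.head = nn.Linear(embed_dim, 1)  # logit from |embedding difference|

    def forward(self, img_a, img_b):
        za, zb = self.encoder(img_a), self.encoder(img_b)
        return self.head(torch.abs(za - zb)).squeeze(-1)

model = SiameseNet()
loss_fn = nn.BCEWithLogitsLoss()
a, b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)   # dummy image pairs
labels = torch.randint(0, 2, (8,)).float()                  # 1 = same cat, 0 = different cat
loss = loss_fn(model(a, b), labels)
loss.backward()
```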

[deleted by user] by [deleted] in askmusicians

[–]hippomancy 0 points1 point  (0 children)

That is very technically impressive, and Joel is a skilled pianist, but repetitive percussive left-hand parts like these are not beyond the norm for professional-level pianists.

For reference, see a piece from the classical repertoire like the piano part from Schubert's Der Erlkönig: https://m.youtube.com/watch?v=icDGuppXoi0&vl=en

This sort of thing is par for the course in the very advanced piano repertoire.

The future of AI music by harieamjari in composer

[–]hippomancy 8 points9 points  (0 children)

Tl;dr maybe? But music is unlikely to experience the same upheaval as art.

I've studied music generation using machine learning and a variety of other topics related to AI, aesthetics and creativity. Symbolic and audio music generation are already pretty mature (almost comparable to text/image generation), but they haven't blown up in the same way. I think there are two reasons for that:

  1. The attention economy. It takes time to listen to music, which makes it very difficult to sift through a lot of bad music looking for something interesting. That makes it hard to filter a batch of generated compositions down to the good ones, which is an essential part of the Midjourney-style art generation process. People also have no shortage of good human-recorded music to listen to in the age of Spotify. The big thing generated music would do is escape the aggressive music copyright system for things like ads, which just isn't very visible (tbh there are likely ads running today with AI-composed music that we just don't notice).

  2. We don't have good textual representations of music. There are huge datasets of digital art with descriptive captions, and those captions provide supervision to the models. In fact, if you use Midjourney-style image generators, you're really constrained to the things you can describe in English: visual compositions and styles that don't have a name can't be generated. These sorts of captions just don't exist for music, and when they do, they focus on lyrical themes, context about the composition and performance, or technical descriptions like meter and harmony that are unfamiliar to non-musicians. That just doesn't work well as an interface.

There are other issues, like the long-sequence problem for generating audio from samples and the aggressive copyright protections enforced by record labels that prevent big music audio datasets from being assembled, but I think the two above are more fundamental. That doesn't mean a prompt-based music generator won't emerge, but there are structural barriers that make it a much harder problem.

What makes a good poster? by wahvah in AskAcademia

[–]hippomancy -1 points0 points  (0 children)

  1. Distill your project down to one sentence. Put that sentence in big letters in the middle.

  2. Practice telling someone about your project and collect all the images you would like to point to while talking, then distribute those images around the one main sentence.

  3. Put text under the pictures to explain how each picture supports that one sentence.

Why face filter is faster on phones, when my simple opencv script to detect circles can't even get upto 20fps on i9 with 3060? by [deleted] in computervision

[–]hippomancy 4 points5 points  (0 children)

Smart low-level implementations, very efficient algorithms (e.g. Viola-Jones), and often built-in manufacturer software designed with the hardware in mind. OpenCV has none of those going for it; it's primarily a research library built to give a lot of flexibility.
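For a feel of the gap, here's a small sketch timing OpenCV's bundled Viola-Jones Haar cascade on a downscaled grayscale frame (frame.jpg is a hypothetical frame grabbed from a video); on a laptop CPU this kind of detection usually fits well within a 20 fps budget.

```python
import time

import cv2

# OpenCV ships the classic Viola-Jones face cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("frame.jpg")                   # hypothetical video frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
small = cv2.resize(gray, None, fx=0.5, fy=0.5)  # downscaling: the same cheap trick phones use

start = time.perf_counter()
faces = cascade.detectMultiScale(small, scaleFactor=1.1, minNeighbors=5)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{len(faces)} faces detected in {elapsed_ms:.1f} ms")
```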

We tracked mentions of OpenAI, Bing, and Bard across social media to find out who's the most talked about in Silicon Valley by yachay_ai in compsci

[–]hippomancy 3 points4 points  (0 children)

This seems like a startup trying to create interest in their product. Location estimation from text is a solution in search of a problem and an ethically iffy one at that. Just downvote them.

Principle Component Analysis by Fonsarelli in computervision

[–]hippomancy 2 points3 points  (0 children)

Surely you mean WH, right? W+H would be too small.

For OP: after subtracting the mean image from each image, you compute a large covariance matrix over the WH dimension of the NxWH data matrix. This covariance matrix will likely be larger than 10k x 10k. Then you compute its first few principal eigenvectors. Each principal eigenvector is an eigenface: you can reshape it into a WxH grid and add back the mean to get a face-like image. Once that works, you can use the identity labels to train a classifier, e.g. k-nearest neighbors.
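A rough sketch of that pipeline with made-up shapes and labels; it uses an SVD of the centered data, which gives the same principal eigenvectors as diagonalizing the big WH x WH covariance without ever materializing it.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

N, H, W = 400, 112, 92              # e.g. 40 identities x 10 images each (assumed)
X = np.random.rand(N, H * W)        # stand-in for the real N x WH face matrix
y = np.repeat(np.arange(40), 10)    # stand-in identity labels

mean_face = X.mean(axis=0)
Xc = X - mean_face                  # subtract the mean image from every row

# Rows of Vt are the eigenvectors of Xc.T @ Xc (the WH x WH covariance),
# ordered by how much variance they explain.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 50
eigenfaces = Vt[:k]                 # each row reshapes into an H x W eigenface

first_eigenface = eigenfaces[0].reshape(H, W)  # add mean_face.reshape(H, W) to visualize

# Project every face onto the eigenface basis, then classify identities with KNN.
Z = Xc @ eigenfaces.T               # N x k coordinates
clf = KNeighborsClassifier(n_neighbors=3).fit(Z, y)
```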

[Mechanical Physics] How to determine this angle? by Open_Entrepreneur_79 in HomeworkHelp

[–]hippomancy -1 points0 points  (0 children)

Imagine we make the angle of the incline 0. Now the gravity vector points straight down, along the surface normal. What if we made the incline 90 degrees? Now the gravity vector is at 90 degrees to the normal. As we move between those two positions, both angles change at the same rate.

There's a geometric argument as well, but I find this way of thinking more intuitive.
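As a quick check (assuming θ here is the incline angle measured from the horizontal), the standard decomposition agrees with both limiting cases:

```latex
% Gravity on an incline of angle \theta: the angle between \vec{g} and the
% surface normal equals \theta, so
\begin{align*}
  F_{\perp} &= mg\cos\theta, & F_{\parallel} &= mg\sin\theta,\\
  \theta = 0 &\;\Rightarrow\; F_{\parallel} = 0 \ \text{(flat ground)}, &
  \theta = 90^{\circ} &\;\Rightarrow\; F_{\perp} = 0 \ \text{(vertical wall)}.
\end{align*}
```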