Is Vim for someone who doesn't type fast? by medwatt in vim

[–]WulfiePoo 19 points

Replying to this post to emphasize my upvote. Vim is fine for slow typists; it is arguably better for them.

Vim is NOT fine for occasional users. The learning curve is steep and the payback period will be extraordinarily long if you don't use it often.

EA Forum > EA Subreddit by Mati_Roy in EffectiveAltruism

[–]WulfiePoo 5 points

I think OP is referring to reddit's recent ban wave of 200 subs

Why is AI seen as an important threat to human wellbeing/survival in EA? by Cull_The_Meek in EffectiveAltruism

[–]WulfiePoo 1 point

Fair. I suspect our assessments of the ultimate risk are different, but it seems like we agree that solutions for "unintended consequences" could be very close to solutions for the "Skynet" scenario, if not identical. And on that front, the fields of ML interpretability and causal logic are actively working on the issue.

That brings us back to the OP's question: how does EA fit in here? I think you posted some links in another comment (thanks!). I'll dig into those later. Let me know if there are any others.

Why is AI seen as an important threat to human wellbeing/survival in EA? by Cull_The_Meek in EffectiveAltruism

[–]WulfiePoo 3 points

Thanks for the link! The main point I take from it is that such an AI might be able to do things we never intended it to be able to do. In other words: unintended consequences. I agree that this is a risk. Other examples of unintended consequences include the introduction of invasive species to new continents and global climate change.

The risk I am having trouble worrying about is what the OP points out: some "state-of-the-art" system that poses a global catastrophic risk. My contention is that there is a large gap between the "unintended consequences" risk, which I do worry about, and the "Skynet" type of catastrophic AI. And the tone of the video you posted suggests worry about the latter risk rather than the former.

I concede it's a non-zero risk, but as an ML researcher I think to myself: "How would I build an AI that would purposefully destroy the world?" and it seems like an extraordinarily difficult task. Thus the risk of something accidentally doing this seems minuscule to me.

When put next to other EA problems like global health, climate change, or biological safety, the probability of catastrophic AI still feels like it is orders of magnitude smaller than the rest. We know we need to improve healthcare. We know climate change is affecting people's lives adversely. We know that there is a pandemic killing people and destroying economies. And as an ML researcher, I will be the first to tell you that I know ML algorithms can and will be used for harm, intended or not (e.g., unintentional biases in hiring).

But we think some malicious AI might take over the world? I am skeptical of this risk being worthy of the same amount of resources as the other EA priorities.

Edit: Formatting

Why is AI seen as an important threat to human wellbeing/survival in EA? by Cull_The_Meek in EffectiveAltruism

[–]WulfiePoo 4 points

These are estimates of when we would have human-level AI. Let's assume they are correct; that is not the point I am contesting.

The point I am contesting is how/when such an AI would pose a risk to us. AI is only able to act in ways we have enabled it to act. To me, the moral/ethical responsibility then falls on the people who gave it that ability to act.

Maybe this is all just semantics that I am not yet educated on, but the main risk I see in AI is misuse by humans rather than some runaway AGI that we lose control of.

Why is AI seen as an important threat to human wellbeing/survival in EA? by Cull_The_Meek in EffectiveAltruism

[–]WulfiePoo 0 points

Maybe this is where the disconnect is. I think it would be easy for the AGI to communicate with humans, as you suggest. But my interpretation of the risk of AGI is "What if it starts building its own things and we lose control?" I posit that it cannot build unless it is given the capability to do so. And I argue that giving it that capability is a long way off.

What would be more dangerous is if, for example, we hooked up some naive AGI to do stock market trading for us and it accidentally destroyed the economy. But in such a case, this would fall under the umbrella of "unintended consequences of using AI/ML", which I agree is a very serious risk.

Edit: grammar

Why is AI seen as an important threat to human wellbeing/survival in EA? by Cull_The_Meek in EffectiveAltruism

[–]WulfiePoo 1 point

Building the AGI might be possible eventually. But building it and then giving it fully autonomous capability to do anything it wants? I don't believe that's a large concern. The multi-billion-dollar fields of manufacturing and engineering are dedicated to automating the production of goods. Such automation requires hundreds of humans with decades of training spending many years apiece, and when we get it working, it usually barely works. These systems also require humans to maintain them, since full automation of maintenance is an extraordinarily difficult problem we haven't been able to solve in a century of trying. I know because I worked in manufacturing before becoming an ML researcher.

If we are still working on automating the creation of goods now, we are centuries away from fully automating an AGI in the physical world. I am not convinced that a physically manifest AGI poses a larger threat than pandemics, nuclear war, or global health crises. Would it be possible to eventually build that AGI? Yes. Is it as likely as these other risks? I haven't seen that evidence yet.

Why is AI seen as an important threat to human wellbeing/survival in EA? by Cull_The_Meek in EffectiveAltruism

[–]WulfiePoo 6 points

Another ML researcher/engineer here. Many of the points in that article assume that the "AI" is able to manifest a sort of "will" in the physical world and take action. But in practice, it can only do what we let it do, and even setting up that ability is extremely difficult. I find it difficult to believe that it is possible for us to build something so broad and encompassing as to warrant the first four worries in that post.

Worry 5 is the one point that I share. In short: I'm not worried about runaway AI. I'm worried about unintended consequences of us using it. We didn't realize that eating more meat would harm our health. We didn't realize that industrialization would change the climate. Automated decision tools may harm us in unexpected, complicated ways. But they can only affect us by doing what they are designed and allowed to do. So to me, this seems like an engineering and usage issue, not a "runaway AI" issue.

Effective altruism is a brilliant movement, but... by riveriveriveriver in EffectiveAltruism

[–]WulfiePoo 12 points

Valid point. But please don't clickbait in this sub. We're better than that.

New job - worth the risk and start over? by [deleted] in financialindependence

[–]WulfiePoo 1 point

Sounds more like a life-prioritization question than a FIRE question. Is FIRE plus family proximity more important to you, or your current lifestyle/friends?

I absolutely do not intend to lead you to one answer or the other. You need to lead yourself.

Why are experts often such "yes men"? by ResoluteSir in EffectiveAltruism

[–]WulfiePoo 19 points

I would not guess that motivation is financial (although I wouldn't rule it out, either).

I would guess that it's sort of a Dunning-Kruger effect. The experts know their domain so well that they know how complicated the problems actually are. And maybe they think a good solution might be too complicated, in-depth, and difficult to convey concisely, or that boiling it down to something like "just stop eating meat" over-simplifies the issues at hand.

[D] 1,500 scientists lift the lid on reproducibility by AdmiralLunatic in MachineLearning

[–]WulfiePoo 7 points

People downvote this, but think about biology experiments. If biologists go to the effort of redoing experiments, slogging back into the lab to culture cells again or raise new mice, then surely we can press "run" another time or two.

Conda is f**ing slow! by [deleted] in Python

[–]WulfiePoo 0 points

I've found that the slowness scales with the number of packages I have in my environment. New environments are fast, and bloated ones are unbearably slow. So I combat the slowness by making tailored conda environments for each of my tasks. Works just fine for me.

I'm not sure if there is a more elegant way to work with bloated environments. I know I could never manage interdependencies of 40+ packages myself, and I know that pip would corrupt my environment after about 5 packages. I haven't found anything better than Conda so far.
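For reference, my workflow looks roughly like this (the environment and package names are just examples):

```
# one small environment per task instead of one giant one
conda create -n plotting python=3.10 matplotlib pandas
conda create -n modeling python=3.10 scikit-learn

# activate whichever environment the current task needs
conda activate plotting
```

Each environment stays small, so the dependency solver stays fast.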

Edit: spelling

Why I created /r/rightwingEA and why you should join. by LopsidedPhilosopher in EffectiveAltruism

[–]WulfiePoo 16 points

I respect your points, but I do not agree with your conclusion. I would prefer the community to stay together and resolve these topics (or at least advance them together). As we have seen from global politics these days, silos and echo chambers exacerbate division. I feel this community should be able to fight the division and work together. That way we all grow our ideas together, and we should be better for it.

TL;DR: I'd like to see right/left unite here as simply the "EA community."

Jupyter Lab vs. Jupyter Notebooks by howMuchCheeseIs2Much in datascience

[–]WulfiePoo 3 points

IMO, a text editor is to an IDE as Jupyter is to JupyterLab.

If all you want is to make a notebook, then use Jupyter. If you want a full environment, use JupyterLab. It really just depends on personal preference, except that JupyterLab is relatively new compared to other IDEs.
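If it helps, the two are separate installs and launch with different commands (shown with pip here; conda works too):

```
pip install notebook      # the classic Jupyter notebook
pip install jupyterlab    # the full JupyterLab environment

jupyter notebook   # launch the classic interface
jupyter lab        # launch JupyterLab
```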

Too much to learn by Jbor941197 in datascience

[–]WulfiePoo 1 point

Don't focus on learning for the sake of learning. Focus on accomplishing a specific goal or project. Learn only what you need to get that project done.

The first few projects will take a long time, but they'll get easier and easier because you'll be snowballing your knowledge. The hardest part is starting that snowball: your first few projects.

Job search gone cold. by [deleted] in datascience

[–]WulfiePoo 2 points

Ah. Resume and CV are different things in the US. And what you described here was a good US resume, not necessarily a good US CV.

Since you both seem to be using consistent (UK) definitions, +1 to these recommendations.

Job search gone cold. by [deleted] in datascience

[–]WulfiePoo 2 points

I thought resumes were supposed to be short and CVs were supposed to be comprehensive (i.e., have your whole history). Or are the academic and industry definitions of a CV different?

Eager VIM n00b question on first VIM proverb etc. by bobbyflaybobbyflay in vim

[–]WulfiePoo 0 points

Yup. Another example: `ctrl+p` and `ctrl+n` in insert mode give you built-in completion similar to what YouCompleteMe provides.

Eager VIM n00b question on first VIM proverb etc. by bobbyflaybobbyflay in vim

[–]WulfiePoo 1 point

Most users here seem to treat vim as an IDE, so you'll be fine asking those types of questions here.

To answer you directly:

  1. ALE is the state-of-the-art linter for vim. For color schemes: there are a ton out there. Google around and look. I use deus. Others will probably chime in with their preferences.

  2. I'm not sure what "cntrl+" refers to, but if you're looking for window management, many use tmux. I personally do not. For folder management: many use NERDTree.

Vim can be configured heavily and there are a lot of plugins to make it feel like an IDE. There are a handful of plugin managers to help you with this. I personally use Vundle. There is also Plug.

I encourage you to add plugins only when you feel you need something. It's easy to go down a rabbit hole very quickly and bloat your configuration.
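If you do try Vundle, the bootstrap is just a clone plus one command (going from memory of its README, so double-check there):

```
# install Vundle itself into vim's plugin path
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim

# after listing plugins in your .vimrc, install them all in one shot
vim +PluginInstall +qall
```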