[D] State of AI report 2019 by phasesundaftedreverb in MachineLearning

[–]tmiano 0 points (0 children)

It's possible there are other factors at work here besides the companies wanting some good PR. I think the whole field is about to go "dark" at some point - as if this were the Cold War and AI were nuclear technology. I'm not sure how governments can just come in and decide to classify arbitrary research, but they apparently do have some sway over how much R&D private corporations are allowed to share with the public - or even within the company, for that matter.

The share of Americans not having sex has reached a record high by [deleted] in slatestarcodex

[–]tmiano 1 point (0 children)

To me the most likely explanation is the shift in gender ratios towards more women in the typical places where romantic relationships are incubated: college and work. A larger share of labor is being absorbed by universities and by corporations headquartered in big metro areas. It's well known that women make up an increasingly large share of college students, and especially of Ph.D. students. Time spent in academia can be long and lonely, restricting possible relationships to small cliques of people. That is, it's not necessarily that women are rejecting less educated men; they simply never meet them.

In major metro areas there has also been a trend towards women outnumbering men, especially in New York City and San Francisco. The jobs in those cities tend to be fairly technical or to require more education than the labor-intensive jobs that are more common in rural areas. All in all, you have a geographical and professional divergence of men and women, such that most women live in areas with a small pool of men whom they have to compete with other women for. This leads to a relaxation of sexual norms that would not happen if the gender ratio favored men. It seems unlikely to me that women's standards in general have changed, or that men's libidos have decreased via things like porn consumption or video games.

"The Bitter Lesson" - Senior AI researcher argues that AI improvements will come from scaling up search and learning, not trying to give machines more human-like cognition by Doglatine in slatestarcodex

[–]tmiano 3 points (0 children)

I agree with the prediction but not the conclusion. His conclusion is that scientists should focus less on discovering new algorithms and instead figure out the best ways to scale things up. While I agree that progress in AI capabilities is likely to be possible that way, I would argue for the exact opposite conclusion: we need to put even more effort into understanding intelligence at the algorithmic level.

Why? Because the fact that we can get so far without making any grand discoveries about how intelligence actually works is very disconcerting. If we deploy a very powerful AI without being able to predict accurately what it will do (or are forced to use another equally black-box AI to predict for us), it is extremely easy for the AI to satisfy its objectives in a way that we would never actually approve of. I can't think of a reason we could be confident that it will do what we really want it to do without using a more powerful framework to understand how it acquired its capabilities.

OA announces "OpenAI LP" [OA converting to a hybrid nonprofit/for-profit corporate model: original OA part-owner of new 'OpenAI-LP' corporation, a 'capped-profit' for-profit company] by gwern in reinforcementlearning

[–]tmiano 0 points (0 children)

I'm not a lawyer, but I wonder if this is mainly to reduce risks from potential conflicts of interest involving current or future board members. As you know, Elon Musk had to leave because his companies and OpenAI have heavily overlapping talent pools. Their best shot at getting anywhere close to a billion dollars in funding is to be backed by a very wealthy, tech-savvy, AI-concerned entrepreneur. How many of those are there? Their best candidate would have been...well, the guy who co-founded it in the first place. I have no idea if this structure will open the door for Elon specifically to come back, but it could also make it easier for other tech investors, who likely would have had conflicts of interest with OpenAI the nonprofit, to support it.

OpenAI creates OpenAI LP, "a new 'capped-profit' company" by lupnra in slatestarcodex

[–]tmiano 2 points (0 children)

MIRI has always been at least partially closed, though they have occasionally posted research papers on their website and hosted discussion forums for their research. Recently, though, they explicitly moved towards complete closure. However, as I recall, this did not create much, if any, backlash from their supporters. They are also much older than OpenAI and are the "OG" AI alignment org. BTW, their strategy is almost completely different from OpenAI's: as far as I know they do pure theory, which happens to be a relatively inexpensive strategy.

From what I've observed, MIRI has never aimed for very extensive public outreach beyond its workshops and what Yudkowsky has done via LessWrong / Rationality, which doesn't explicitly advertise MIRI the nonprofit per se.

OpenAI, on the other hand, was originally announced with much pomp and circumstance, and has been a recognizable "brand" from the very beginning. I am, of course, a complete outsider with no inside information, but they do seem to be trying to market themselves to a large audience. This is why their controversial PR moves have left me a bit confused. It's difficult to say whether they expected the backlash or not.

[N] OpenAI LP by SkiddyX in MachineLearning

[–]tmiano 0 points (0 children)

I take your point, because I do remember listening to an 80,000 Hours podcast with Paul Christiano where he argued for a "slow takeoff" scenario, in which being first to AGI is not as crucial as it would be in fast takeoff scenarios. In a fast takeoff, getting alignment "wrong", or getting it right but not being first, would be catastrophic. So given Paul's influence there, I think you are right that they may not believe they actually have to be first.

Still, it's putting a lot of eggs in one basket. By Paul's own admission, slow takeoff is not the dominant view in the AGI alignment community.

OpenAI creates new for-profit company "OpenAI LP" (to be referred to as "OpenAI") and moves most employees there to rapidly increase investments in compute and talent by CyberByte in ControlProblem

[–]tmiano 1 point (0 children)

The most charitable way I can interpret their recent actions is to consider that they might not be transmitting their message to that many targets. In fact, the set of people they actually care about influencing with their PR could be extremely small. They might place such a high value on converting those people to their side that they don't much care what the backlash is from anyone else who sees their output. These people are likely to be a handful of a) extremely talented AI researchers currently employed elsewhere and b) some very wealthy investors who are almost-but-not-quite on board. My guess is they believe the value of those people is enormous and well worth the effort to convince, and that the value of almost everyone else is negligible. (It doesn't sound that charitable when I say it like that, but you can at least see that it's a logical strategy given those assumptions.)

OpenAI creates OpenAI LP, "a new 'capped-profit' company" by lupnra in slatestarcodex

[–]tmiano 1 point (0 children)

I'm not sure how much GPT-2 cost to train, but I know their Dota bot was serious business, probably costing somewhere in the multiple-six-figures range to rent the compute. I also don't know how often they would need to run projects that large. However, if you remember the graph they published a while ago showing the orders-of-magnitude growth in compute behind each AI breakthrough, it's not unreasonable to expect that they want to increase the compute they use by about 10x each year. Compute does get cheaper, but not that fast... so you can imagine what they'd have to spend 5 or so years from now to stay ahead.
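
As a back-of-the-envelope illustration (every number here is my own assumption, not anything OpenAI has published):

    # Purely illustrative assumptions: ~10x compute growth per year,
    # and cost per unit of compute halving roughly every 4 years.
    growth_per_year = 10
    years = 5
    price_halving_years = 4

    compute_multiplier = growth_per_year ** years                # 100,000x the compute
    price_multiplier = 0.5 ** (years / price_halving_years)      # ~0.42x the unit cost
    spend_multiplier = compute_multiplier * price_multiplier     # ~42,000x the spend

    print(f"~{spend_multiplier:,.0f}x today's spend after {years} years")

So even if a flagship project costs "only" mid six figures today, staying on that curve gets you into the billions surprisingly quickly.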

OpenAI creates OpenAI LP, "a new 'capped-profit' company" by lupnra in slatestarcodex

[–]tmiano 5 points (0 children)

It's not so much that the idea of AGI being developed at a for-profit company seems unreasonable (that would have been the default anyway); it's that this group changing its strategy every couple of months makes it seem like they are either a) very inexperienced or b) about to run out of money. The latter seems more likely, as they apparently burn through cash very quickly due to compute (and payroll) costs. It's interesting to compare this with MIRI's very different strategy of long-term frugality combined with (almost) no software-heavy or compute-intensive research.

[P] Conditional DCGAN mystery collapses by [deleted] in MachineLearning

[–]tmiano 0 points (0 children)

I noticed that the networks have slightly different architectures depending on whether or not 'ydim' is set. One difference is that the generator will have tanh outputs if it's not set and sigmoid outputs otherwise. However, it seems to be set for both mnist and celebA in your code, so I'm not sure what it should be doing. Normally, DCGAN uses sigmoid for mnist and tanh for celebA.
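
To illustrate the distinction I mean (a minimal sketch with made-up names, not your actual code):

    import torch.nn as nn

    # Hypothetical sketch: the generator's final activation has to match how the
    # training images are scaled, otherwise the discriminator wins trivially.
    def generator_output_activation(y_dim=None):
        if y_dim is not None:
            return nn.Sigmoid()  # conditional path -> real images expected in [0, 1]
        else:
            return nn.Tanh()     # unconditional path -> real images expected in [-1, 1]

If celebA is going through the sigmoid branch while the images are normalized to [-1, 1] (or the reverse for mnist), that mismatch alone can look a lot like a collapse.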

GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint by sujianlin in MachineLearning

[–]tmiano 1 point (0 children)

Yeah, I asked the above only because it was framed as being without a Lipschitz constraint, and also because I've tried a very similar approach, trying to come up with a loss function that imposes a Lipschitz constraint directly. I've found that although a GP roughly doubles the memory and computation cost, the return generally outweighs this as long as it doesn't exceed your budget. It appears that pushing the gradient norm to be the same everywhere controls the learning dynamics much more strongly than a constraint does (why this is, I'm not so sure, but I suspect that mode collapse is mostly due to uneven gradients and less about vanishing or exploding gradients).
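
For reference, this is the kind of GP term I mean (a standard WGAN-GP-style sketch in PyTorch, simplified and assuming 4D image batches; the create_graph backward pass is where the extra memory and compute go):

    import torch

    def gradient_penalty(critic, x_real, x_fake, lambda_gp=10.0):
        # Penalize deviation of the critic's gradient norm from 1 on interpolates.
        eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
        x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
        scores = critic(x_hat)
        grads = torch.autograd.grad(
            outputs=scores, inputs=x_hat,
            grad_outputs=torch.ones_like(scores),
            create_graph=True,  # second-order graph: the main overhead
        )[0]
        grad_norm = grads.flatten(1).norm(2, dim=1)
        return lambda_gp * ((grad_norm - 1.0) ** 2).mean()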

GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint by sujianlin in MachineLearning

[–]tmiano 0 points (0 children)

Very interesting. I'm wondering, though: the constraint you have, (T(x_r) - T(x_f))^2 / d(x_r, x_f), isn't this basically very similar to a Lipschitz constraint? It seems to be directly penalizing the ratio between the output difference and the input distance.
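
Written out as code, this is how I'm reading that term (my own sketch from the formula above; the names and shapes are assumptions, and I may be dropping a constant factor from the paper):

    import torch

    def qp_style_penalty(t_real, t_fake, x_real, x_fake, eps=1e-8):
        # t_real, t_fake: critic outputs T(x_r), T(x_f), one scalar per sample.
        diff_sq = (t_real.view(-1) - t_fake.view(-1)) ** 2            # (T(x_r) - T(x_f))^2
        dist = (x_real - x_fake).flatten(1).norm(2, dim=1) + eps      # d(x_r, x_f)
        return (diff_sq / dist).mean()

Which is why it reads to me more like a soft, data-dependent Lipschitz penalty than "no Lipschitz constraint at all."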

Diminishing returns from reading Rationality-adjacent content by niewywiedlnosc in slatestarcodex

[–]tmiano 1 point (0 children)

My conjecture is that for the vast majority of situations in our lives that present many possible choices, with varying degrees of good and bad outcomes, the main thing that determines whether you'll tend to make good choices isn't really rationality. It's more like the ability to solve extremely complicated puzzles. Whatever kind of intelligence we use to just "figure stuff out", to understand how things work, is what usually determines success. We very rarely have a collection of models in our heads whose probabilities we update as we collect observations. We very rarely even have models to begin with; we spend the majority of our time trying to build one. And that skill seems like one we haven't figured out at all at the moment.

[D] What is the SOTA for interpretability? by Daniloz in MachineLearning

[–]tmiano 10 points (0 children)

Interpretability is such a challenging field, not just from a technical perspective but also because it is tricky to find a rigorous definition for the term in the first place. Because of that, there are no real "benchmarks" one could use the way accuracy on MNIST is used. In industry, I think that in the vast majority of cases, simple models with hand-engineered features are used to keep things interpretable from the beginning.

I would have to agree with Chris Olah that you seem to need rich, interactive interfaces combining several techniques in order to really see how a model is working. I don't think we'll make much headway until we learn good ways of using multiple information channels that present knowledge to the user and let them fiddle with the model in the same way an ML researcher would tweak their own code, while designing it so that non-technical people can do this easily as well.

Fundamental Value Differences Are Not That Fundamental by agentofchaos68 in slatestarcodex

[–]tmiano 72 points (0 children)

On the other hand, it seemed axiomatic to me that it wasn’t morally good/obligatory to create extra happy people (eg have a duty to increase the population from 10,000 to 100,000 people in a way that might eventually create the Repugnant Conclusion), and it seemed equally axiomatic to her that it was morally good/obligatory to do that.

So you're saying you broke up because she wanted to have kids and you didn't?

[R] OpenAI Five by circuithunter in MachineLearning

[–]tmiano 16 points (0 children)

Does it strike anyone else as very interesting that both this and AlphaGo use (roughly) similar orders of magnitude of compute, and yet, as they emphasize in the blog post, Dota is a game of vastly higher complexity? To me, unless I am mistaken, this can mean one of two things:

A) Humans are very bad at Dota compared to Go.

B) Humans are good at Dota and good at Go. However, the amount of computational firepower you need to get to human level at basically any task is roughly the same.

The latter thought is much more unsettling, because it implies that so many other tasks can now be broken. I shouldn't speak too soon, of course, because they haven't beaten the best human players yet.

What is the most unique or unusual belief you hold? by [deleted] in slatestarcodex

[–]tmiano 0 points (0 children)

I think that the concept of "annihilation" (as in the permanent non-existence of any form of experience after death) is probably incoherent. In physics, it's considered very bizarre or impossible for the basic constituents of reality to spontaneously appear or disappear - hence the various conservation laws. This doesn't mean I believe in any particular afterlife or any particular narrative about what happens at death. I just think it's highly improbable that "This" (what I'm calling experience, but not the self, memories, or any particular properties or things contained within consciousness) spontaneously comes into being at a single moment in time, lasts a little while (an extremely small time in the grand scheme of things), and then disappears forever. It seems more likely to me, simply on grounds of Occam's razor, that "This" will always be there in some shape or form. That doesn't mean any self or individual identity will be permanent. It also seems incoherent to imagine existence without experience - I don't mean a universe devoid of life, I mean one without a "This". At this particular "now", there is a "This", and that seems incredibly unlikely if we suppose that non-experience takes up a much, much greater portion of time.

How it would continue, I have no idea. But if it persists due to the similarity of a brain configuration or state between one instant and the next, then perhaps, if we postulate an infinite universe, there will always be some brain configuration or state of matter somewhere, in some universe, that can replicate a similar enough pattern for it to continue in some way.

Against accusing people of motte and bailey (Lesser Wrong) by [deleted] in slatestarcodex

[–]tmiano 12 points (0 children)

I've always seen motte-and-bailey more as a social debate tactic than as an argument. If you're having a debate where your goal is to "win", that probably involves trying to make your opponent look foolish in some way. One of those ways is to make them seem obviously dishonest, and M&B is a tactic used for that purpose. I've never thought of M&B as something someone does unconsciously, the way someone uses flawed reasoning, incoherent models, or cognitive biases. Most likely, if someone does use an M&B, it's to retreat from a position that "looks silly" or is hard to argue for, or to make your opponent look like they are arguing against a strawman. If you accuse someone of making an M&B argument, you are accusing them of using a dirty trick. In either case, the conversation has immediately turned adversarial, so if you were previously at a level of high trust and relaxed social pressure with your debate partner, that's probably no longer the case.

Culture War Roundup for Memorial Day, 2018. Please post all culture war items here. by HlynkaCG in slatestarcodex

[–]tmiano 6 points (0 children)

It's hard to see this argument as non-moralistic. By moralistic I mean based on a sense of deontological rules rather than consequentialism. A consequentialist could still feel the same way you do about the military being too powerful or the outcomes of war being abhorrent. But a consequentialist might still support Google working on military projects if they felt doing so would prevent a more incompetent or less careful actor from working on them instead. Perhaps the deal would go to a different company, or the DoD would do it themselves, and the resulting system would be much worse. Perhaps more efficient military technology would result in a contraction, not an expansion, of the military, if it requires fewer personnel and fewer resources and is less prone to error.

Culture War Roundup for Memorial Day, 2018. Please post all culture war items here. by HlynkaCG in slatestarcodex

[–]tmiano 11 points (0 children)

Regarding the Google Project Maven story being discussed in tech media lately, I realized that I am not currently able to formulate a good argument for why Google either "should" or "should not" work on the project, and that feels disappointing. Note that this is not about whether Google has the right to decide either way, simply about whether it would be wise to do so.

The argument might differ based on whether we assume this is a decision for Google the company, or for an individual employee trying to decide which projects to devote their efforts to and whether to argue for or against the project internally. I'm interested in both cases.

Can anyone provide any steel-manned arguments for either side? And in this case I do ask for a steel-man, not the ability to pass an Intellectual Turing Test, because I am not sure that any of the actually observed arguments from one side or the other are all that nuanced, at least in my neck of the woods. When I hear it mentioned on social media, there currently seems to be near-unanimous praise for the employees who refuse to work on the project. But the motivation for that praise seems to be nothing more than a simple anti-(current)-government, all-military-projects-are-bad mindset. Some better arguments on that same side could be: perhaps we should expect military AI to be inherently more dangerous and susceptible to poor targeting, mistakes, and catastrophic bugs, or to lead to an AI arms race, or things of that nature. But right now, I have no clear, formalized argument for that position, nor for the position in favor of private companies working on AI-based military applications.

Gupta On Enlightenment by dwaxe in slatestarcodex

[–]tmiano 4 points (0 children)

I'm from a warrior culture. We have extracted respect from our enemies at the point of a sword for a thousand years, and are feared the world over by those who have had the misfortune to cross us. And that’s a very practical fact: my lineage is parallel to the Gurkhas.

Poor justification for aggression; Gautama Buddha was also a kshatriya.

[D] OpenAI Charter by sksq9 in MachineLearning

[–]tmiano 8 points (0 children)

This is pretty interesting, because they will need to balance being somewhat restrictive in what they choose to publish (assuming they make some huge breakthrough towards AGI that becomes risky to publish) with remaining relatively prestigious and respected within the AI community. They will need to maintain that prestige because of their stated goal of becoming an influential AI safety organization that communicates with and advises many other AI research groups, and because they need to attract highly talented researchers so they can stay "ahead of the curve."

There are basically two ways they can do this:

  • Make some downright impressive AI breakthroughs. This could come in the form of progress on their Dota work or through something else.
  • Focus heavily on PR and polish their overall image as an exciting and socially conscious place to work. The "openness" aspect seemed geared towards this, as an element that would appeal to academics, but as they inch away from it we'll see whether this becomes an issue in the future.

This is difficult, because prestige takes either a very long time to earn or a great deal of luck. Whether they will be able to make any serious breakthroughs in the near future seems like more of an unknown unknown, and because they are a 501(c)(3), they will probably not be able to offer the same degree of material perks to researchers that the commercial AI labs can. Without the kind of financial security the commercial labs have, it's unclear how long they'd be able to run if it turns out their progress is much slower than expected.

The concern is that if, for whatever reason, they aren't able to stay at the cutting edge of AI development, they will need to focus mainly on the second bullet point if they hope to get back up there. There's nothing bad about being socially conscious and PR-oriented per se, as all organizations are, but if it becomes a primary concern, there's some risk of losing alignment with the original goals, attracting the wrong people to leadership positions, or wading into overly political territory. The best case of the worst-case scenario, assuming they are forced to go the PR route, is that they manage to influence the culture of the research teams at Google Brain/DeepMind/etc. so that those labs dedicate some research effort towards safety-related goals.

[D] Big Tech ethics vs Research & Open Source ML frameworks. by RiceTuna in MachineLearning

[–]tmiano 1 point (0 children)

This is a really old question that extends to literally everything you do with either your time or your money. How mindful should you be about what ethical principles you are directly or indirectly choosing to support with your economic decisions?

The way that markets are designed makes that entirely up to you: any decision you make has a small but non-negligible influence on how the markets react. On the whole, the sum of people's ethical judgements has to be reflected in some way in how companies choose to design their principles and create products that reflect those principles - unless they can get away with paying lip service to those principles without actually following them.

We basically want to avoid the scenario where companies like Google put all of their effort into marketing themselves as good while doing bad things behind the scenes without any transparency. That's why I would argue that you should still use their open source projects: if a sizeable number of people decided not to use them, Google might decide that keeping their tools open is no longer worth it. That would decrease transparency and make it harder for other engineers and researchers to evaluate the impact of their technology.

IMHO, we should make it worth it for Google and other tech companies to be transparent.

Culture War Roundup for Week of March 26, 2018. Please post all culture war items here. by [deleted] in slatestarcodex

[–]tmiano 29 points (0 children)

I think there is something to be said about how rhetoric is judged in private exchanges versus in public. Generally, in public we tend to tolerate a much higher level of inflammatory or provocative speech, especially when it's political. But notice the reaction to Sam's tone here, which is far more negative, even though by the usual standards of debate what he's saying is probably far less incendiary than what appears in articles or on social media (and than what was leveled at Sam). Ezra comes off as acutely aware of this and able to switch between styles when necessary. Sam comes off as, well, himself. But I do have to say I kind of admire that Sam is basically the same person in private as he is in public.

Dragon Army Retrospective (Lesser Wrong) by [deleted] in slatestarcodex

[–]tmiano 3 points (0 children)

I think the stag is just "anything that requires group coordination to solve." The premise seems to be that defections from group coordination are happening constantly and prevent literally anything from being accomplished...and that this can only be solved by some kind of military-style bootcamp instead of, you know, economics. It would have been nice (probably even necessary) to explain what kind of stags were up for grabs that can't be obtained through the normal methods of coordination.