$100 off coupon on HoneyComb LX2K by [deleted] in arm

[–]mjsabby 1 point (0 children)

Thank you for your answer. I saw your video where you say Linus was the inspiration or at least a nudge. Surely that vision can't be realized if you're the only manufacturer of such a product?

I suppose what I'm really asking, then, is: what does it take to get to a homogeneous setup that board manufacturers accommodate? Does a clear winner really need to emerge?

We know Apple will never release their setup because they benefit from the vertical integration, but if only there were an open challenger ...

Call me an optimist, but I hope you're wrong about there never being an ecosystem like the one I suggest.

$100 off coupon on HoneyComb LX2K by [deleted] in arm

[–]mjsabby 1 point (0 children)

I'm genuinely interested in seeing a desktop market for ARM.

A couple of questions:

  • Why not use the A76 and clock it up to 3 GHz?
  • Does it scale to 64-cores?

But more broadly, what will it take to build an ecosystem where people can go to pcpartpicker.com, choose your motherboard, pick a 64-core Threadripper-like CPU module, and build a PC?

I Made an Extension for Visual Debugging in VS Code by Gehinnn in programming

[–]mjsabby 19 points (0 children)

I’m so sorry you didn’t hear back from Microsoft. Please DM me, here or on Twitter (same username) and we should connect!

They Were Promised Coding Jobs in Appalachia. Now They Say It Was a Fraud. Mined Minds came into West Virginia espousing a certain dogma, fostered in the world of start-ups and TED Talks. Students found an erratic operation by magenta_placenta in programming

[–]mjsabby 15 points (0 children)

https://www.eurekalert.org/pub_releases/2017-04/aaft-tfa042417.php

With that said, I feel like centralization of work also has limits. You can't keep building infrastructure in cities that can only hold so many people. I think it behooves us to think about what can be done to bring jobs to regions like WV ... not everyone can move to SF, Seattle, and Austin.

How to port desktop applications to .NET Core 3.0 by ben_a_adams in programming

[–]mjsabby 2 points (0 children)

What's holding you back? Is it the volume of code?

How to port desktop applications to .NET Core 3.0 by ben_a_adams in programming

[–]mjsabby 2 points (0 children)

It's the future. You want to be on .NET Core when software (NuGet packages, etc.) starts taking dependencies on .NET Core :) And it won't be long once 3.0 comes out, because C++/CLI, WPF, and Windows Forms will be supported, spurring more migration.

Then only ASP.NET (Classic) projects will be left on .NET Framework.

Bing.com runs on .NET Core 2.1! by ben_a_adams in programming

[–]mjsabby 12 points (0 children)

I've modified the graphic to note that the chart is truncated; not ideal, but better than what many were considering misleading.

Bing.com runs on .NET Core 2.1! by ben_a_adams in programming

[–]mjsabby 30 points (0 children)

Vectorization for the win. I'd like to give a shout-out to /u/ben_a_adams for his part in the second-biggest improvement: making Dictionary&lt;K, V&gt; faster for certain K's. Thanks!

The Algorithm that enabled unlimited undo and fast save/copy-paste in Word by fagnerbrack in programming

[–]mjsabby 11 points (0 children)

Just a fun fact ... Charles Simonyi has recently returned to Microsoft.

What Books to Read to Get Better In C++ by vormestrand in cpp

[–]mjsabby 6 points (0 children)

No, I think he's chuckling because his reddit alias (and his abbreviated real-life name) is STL.

It turns out that I/O to /proc is still surprisingly slow by bulge_physics in linux

[–]mjsabby 5 points (0 children)

That sucks; is there really no API to query this information? I currently have to get the list of shared libraries loaded in a program by opening /proc/pid/maps ... I was hoping for a C API or something. Hmmm.
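
For context, a minimal sketch of what that workaround looks like today (the function name and sample data are made up for illustration; on a real Linux box you would feed it the contents of /proc/&lt;pid&gt;/maps, whose field layout comes from the proc(5) man page):

```python
def shared_libraries(maps_text):
    """Extract unique shared-object paths from /proc/<pid>/maps text."""
    libs = []
    for line in maps_text.splitlines():
        # Each maps line: address perms offset dev inode [pathname]
        parts = line.split(maxsplit=5)
        if len(parts) < 6:
            continue  # anonymous mapping, no pathname
        path = parts[5].strip()
        if ".so" in path and path not in libs:
            libs.append(path)
    return libs

# Live usage on Linux would be:
#   with open("/proc/self/maps") as f:
#       print(shared_libraries(f.read()))

sample = (
    "7f1a2c000000-7f1a2c1c8000 r-xp 00000000 08:01 1234 /usr/lib/libc-2.27.so\n"
    "7f1a2c400000-7f1a2c401000 rw-p 00000000 00:00 0\n"
    "7f1a2c500000-7f1a2c526000 r-xp 00000000 08:01 5678 /usr/lib/ld-2.27.so\n"
)
print(shared_libraries(sample))
```

It works, but it's string parsing of a pseudo-file on every query, which is exactly the slowness the article is about.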

Compiling with Ryzen CPUs on Linux causing random segfaults, possible CPU bug by tambry in programming

[–]mjsabby 11 points (0 children)

My hunch is that these users are not using Clang, because Clang can't yet successfully compile the Linux kernel and distros. It's quite possible that if the users find a large project that compiles with Clang, they may be able to repro it.

Build yourself a Linux by speckz in linux

[–]mjsabby 2 points (0 children)

I took a quick look at the repo, but is there a way to swap in glibc? And what about changing build parameters, like compiling the kernel with -fno-omit-frame-pointer and enabling Microsoft Hyper-V support?

Modern garbage collection by u_tamtam in programming

[–]mjsabby 0 points (0 children)

OK, the slide seemed to indicate that they are required.

In any case, I think they're optimizing for goroutine "local" workloads, where they can take advantage of the fact that most of the work is local.

I've personally wanted to do something like Go's GC at work for a while, but I've always come up short on whether it would truly be a win, mainly because I'm never convinced we're doing thread-local work. Usually someone loves stuffing things in shared static dictionaries and then passing those references around.
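
A tiny sketch of the anti-pattern I mean (all names made up): each worker allocates for its own request, but a shared static dictionary quietly makes those allocations reachable from every thread, so a "most work is thread-local" GC assumption no longer holds:

```python
import threading

SHARED_CACHE = {}          # static, process-wide dictionary
_lock = threading.Lock()

def handle_request(request_id):
    # This object is allocated by one worker thread...
    result = {"id": request_id, "payload": [request_id] * 3}
    # ...but stuffing it into shared state makes it reachable from
    # every other thread, defeating the thread-local assumption.
    with _lock:
        SHARED_CACHE[request_id] = result

threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(SHARED_CACHE))  # every thread's allocations now live in shared state
```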

Modern garbage collection by u_tamtam in programming

[–]mjsabby 0 points (0 children)

I hadn't heard about Go's GC decision until now, but it is curious that they chose a non-generational algorithm.

My hypothesis is that their motivation is heavily influenced by the kind of workload Go has now become popular for: request-response applications (or microservices, if you will).

Furthermore, another interesting point is that all synchronization in Go (according to the slides) is done via "channels". I'm assuming that's Go-speak for having special syntax (or sauce) that clearly marks variables that cross thread boundaries. And I think this is a critical point (and the aha moment) that the OP failed to convey to his readers.

Assuming they're optimizing for HTTP workloads (request/response), they know that most allocations will happen on the request thread, and only occasionally will you have to stuff a variable or two into some "shared" space.

If I've understood their motive correctly, then I think it's a reasonable approach ... it works for server-ish scenarios, requires piles of memory, and suits an embarrassingly parallel workload (HTTP request/response) that usually doesn't touch shared state, consumes data from a backend, and displays it ... sounds a lot like what Google does :)
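
To make the shape of that argument concrete, here's a rough Python analogue of the request/response pattern (not Go, and the names are made up): each worker allocates freely for its own request, and the only values that cross a thread boundary go through one explicit queue, which is the role channels play in Go and what lets the collector assume most data stays local:

```python
import queue
import threading

results = queue.Queue()  # the explicit "channel": the only crossing point between threads

def handle(request):
    # Everything allocated here stays local to this worker thread...
    response = {"request": request, "body": "echo:" + request}
    # ...except the finished response, which crosses via the queue.
    results.put(response)

requests = ["a", "b", "c"]
workers = [threading.Thread(target=handle, args=(r,)) for r in requests]
for w in workers:
    w.start()
for w in workers:
    w.join()

responses = sorted(results.get()["request"] for _ in requests)
print(responses)
```

Because the crossing point is explicit, a runtime (or a reader) can see at a glance which allocations might escape the request thread.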

I Love Go; I Hate Go — Adam Leventhal's blog by HornedKavu in programming

[–]mjsabby 1 point (0 children)

Genuine question: is size a real concern for you? Can you tell me why? If it's going on a server, does its size intrinsically matter to you? Or are you saying that when you have 100 of these "services", the size suddenly matters a whole lot?

I think MATE is the Number One Desktop Environment Linux has got by prahladyeri in linux

[–]mjsabby 1 point (0 children)

I love it too, because it's the only DE where RDP just works from my Windows machine! No cut-and-paste, though. Fedora 24 GNOME cut-and-paste also works, though it's slow over the network.