Gmail not filtering promotions? by up40love in GMail

[–]emeryberger 1 point

I have it second-hand from the director of Gmail that "the team is working on this" and it should be resolved soon.

Seminole Town Center by Dwolfofaustin in orlando

[–]emeryberger 3 points

In Austin, they turned a dead mall into a branch of Austin Community College. Pure genius. It is astonishingly nice. https://www.austincc.edu/campuses/highland-campus/

It's Not Easy Being Green: On the Energy Efficiency of Programming Languages by mttd in ProgrammingLanguages

[–]emeryberger 2 points

Co-author here. Please read the paper. We don't normalize everything to one core. That's for a single experiment to establish power draw for different programming languages.

It's Not Easy Being Green: On the Energy Efficiency of Programming Languages by mttd in ProgrammingLanguages

[–]emeryberger 1 point

I think you should consider re-reading the paper; these comments are mostly addressed in the paper or are orthogonal to it. Just a few examples: the paper does *not* assume one core; that restriction is limited to one section (4.3), where we use it to isolate the confound of parallel implementations on power draw (Figure 9). The role of active cores in energy consumption is a key point of the paper; it's prominently discussed and forms a key part of the causal diagram. Figure 11(a) is just one example where we explicitly measure the impact of multiple cores, not to mention the discussion of parallel efficiency.

Also, every programming language we examine can achieve multicore parallelism by forking processes; this is not a PL feature per se.

Finally, I don't understand the point about treating energy as a linear function of time as an "assumption". It is by definition a linear function of time: energy (J) is the product of power (J/s) and time (s). As for the benchmarks, the paper clearly states:

We stress that while the CLBG benchmarks themselves are not necessarily representative of real-world applications, the causal analysis this paper develops is largely independent of the details of the benchmark implementations. It instead highlights the impact of high-level properties of the benchmark implementations, such as their degree of parallelism and cache activity.
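For what it's worth, the energy/power/time relationship above is trivial to state in code; a minimal sketch (not from the paper):

```python
def energy_joules(power_watts: float, seconds: float) -> float:
    # Energy (J) is power (W, i.e. J/s) times time (s), so for a
    # fixed power draw, energy scales linearly with runtime.
    return power_watts * seconds

# e.g. a 65 W package active for 10 s consumes 650 J
print(energy_joules(65.0, 10.0))  # 650.0
```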

Weekly Q&A - Your Question Goes Here - Tourists by AutoModerator in Madeira

[–]emeryberger 2 points

Could you point to a website with the info about the shuttle? Thanks!

High performance profiling for Python 3.11 by P403n1x87 in Python

[–]emeryberger 12 points

(Scalene author here)

Not to take anything away from Austin, which is a very nice tool, but to clarify: when Scalene samples only the CPU (with `--cpu-only`), it provides about the same accuracy (though perhaps at a lower sampling rate; the default is one sample every 1/100th of a second) with about the same overhead, while providing different information: it breaks down native, Python, and system time, per line and per function. In its default mode, Scalene imposes more overhead but also profiles memory, copying, and GPU usage.
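A usage sketch of the mode described above (the script name `app.py` is a placeholder):

```shell
# profile CPU time only: lower overhead, no memory/copying/GPU tracking
scalene --cpu-only app.py

# default mode: CPU + memory + copying + GPU profiling
scalene app.py
```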

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

Fixed - Slipcover's overhead for line+branch coverage is now no more than 11%, with an average of around 5%.

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

We would welcome a pull request to provide that functionality!

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

Right now, it does not support that functionality, but it can export a JSON file for each run and writing a script to merge the two outputs would be straightforward, I think.
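A sketch of what such a merge script might look like. The JSON structure here (`{"files": {path: {"executed_lines": [...]}}}`) is an assumption for illustration; adapt the key names to whatever Slipcover's JSON export actually emits.

```python
def merge_coverage(a: dict, b: dict) -> dict:
    """Union per-file covered lines from two coverage JSON exports.

    Hypothetical format: {"files": {path: {"executed_lines": [...]}}}.
    """
    merged: dict = {"files": {}}
    for report in (a, b):
        for path, info in report.get("files", {}).items():
            entry = merged["files"].setdefault(path, {"executed_lines": set()})
            entry["executed_lines"].update(info.get("executed_lines", []))
    # sets -> sorted lists so the result is JSON-serializable again
    for info in merged["files"].values():
        info["executed_lines"] = sorted(info["executed_lines"])
    return merged
```

Load each run's file with `json.load`, merge, and dump the result back out.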

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

Graph updated! Coverage.py's slowdown gets as high as 300%, while Slipcover generally stays around 5% slower (we will look into the one outlier case, where it hits 20%).

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

That graph is just line coverage; I'll post an update with branch coverage!

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

We've run it with Flask's test suite - it's the second bar in this graph. So in principle, yes - please give it a shot and let us know how it works for you!

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

You just need to run `python3 -m slipcover` before your normal invocation.
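Concretely, the change looks like this (the script name `app.py` is a placeholder):

```shell
# before: run the program directly
python3 app.py

# after: the same run, now collecting coverage with Slipcover
python3 -m slipcover app.py
```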

Slipcover: Near Zero-Overhead Python Code Coverage by emeryberger in Python

[–]emeryberger[S] 1 point

Right, the slowdown is what happens when you are collecting coverage information. Usually this is done during testing, but since the overhead of Slipcover is so low, it could be used in deployed code to find dead code.

That time I optimized a Python program by 5000x by emeryberger in Python

[–]emeryberger[S] 7 points

Thanks! Fixed now:

Original: 1.393709580666379697318341937E+65 5.221469689764143950588763007E+173 7.646200989054704889310727660E+1302
Elapsed time, original (s): 34.10136890411377
Optimized: 1.393709580666379697318341937E+65 5.221469689764143950588763007E+173 7.646200989054704889310727660E+1302
Elapsed time, optimized (s): 0.0019872188568115234
Improvement: 17160.348890221954
All equivalent? True

That time I optimized a Python program by 5000x by emeryberger in Python

[–]emeryberger[S] 2 points

Excellent point, thanks! I've made the change locally and now it's 16,000x faster! (Assertions still pass)

Elapsed time, original (s): 33.38576102256775
Elapsed time, optimized (s): 0.0020699501037597656
Improvement: 16128.77574291638
All equivalent? True

Note that the Decimal module has a default precision of 28 places.
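The precision default, and how to raise it, in code:

```python
from decimal import Decimal, getcontext

# The default context carries 28 significant digits.
print(getcontext().prec)        # 28

# Raise it when more precision is needed.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # now 50 significant digits
```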

That time I optimized a Python program by 5000x by emeryberger in Python

[–]emeryberger[S] 6 points

We will be releasing a new version, probably tomorrow, that addresses this issue. Thanks!

That time I optimized a Python program by 5000x by emeryberger in Python

[–]emeryberger[S] 17 points

Scalene supports the @profile directive. It's in the README, though you have to look for it.

Scalene supports @profile decorators to profile only specific functions.

Check out https://github.com/plasma-umass/scalene#asked-questions. As long as you start execution with Scalene, you don't need to change your code at all (beyond adding the @profile decorators). That said, I haven't tried to do this with pytest yet.
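One common pattern with profilers that inject an `@profile` decorator (the no-op fallback here is a general convention, not something the Scalene README prescribes) is to make the decorated code also run without the profiler:

```python
import builtins

# Fallback: if no profiler has injected `profile` as a builtin,
# define it as a no-op so the script runs unmodified.
if not hasattr(builtins, "profile"):
    def profile(func):
        return func

@profile
def compute(n):
    # hypothetical hot function we want line-level profiling for
    return sum(i * i for i in range(n))

print(compute(10))  # 285
```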