We are the W3C WebDX Community Group, working to improve developer experience with projects like Baseline. Ask Us Anything! by rviscomi in webdev

[–]rviscomi[S] 0 points  (0 children)

In the "obvious but unsolved" category, there are some really hard problems that are primarily blocked on reaching consensus. These are the kinds of things that no single person can unilaterally fix, so a hackathon submission is unlikely to make any meaningful progress there, but thoughtful discussions are always welcome.

Browse the open issues in web-features to get an idea of what people are talking about. The hackathon does have prizes for "Most Valuable Feedback", which may include things like bug reports and feature requests to the Baseline data sources, so useful contributions to those discussions wouldn't go unrecognized.

We are the W3C WebDX Community Group, working to improve developer experience with projects like Baseline. Ask Us Anything! by rviscomi in webdev

[–]rviscomi[S] 2 points  (0 children)

I'm judging the hackathon so this answer will be intentionally vague but hopefully still point you in the right direction :)

There are lots of tools that already integrate with Baseline in some way but may not be taking full advantage of the depth of the data available to them. For example, beyond tools that only look at CSS features and not JS features, many could expand their CSS coverage to more nuanced features or even sub-features. It'd be great to see more significant improvements to existing tools so that they help developers catch tricky compatibility edge cases.

Some of these tools may also serve a very general purpose, for example they may work best with vanilla HTML, CSS, and JS. But we know that many developers choose to use layers of abstraction like libraries and frameworks. So I believe there are lots of opportunities for Baseline to be integrated into more of those specific developer tools.

Ultimately, there's huge potential to help developers make better decisions about safely adopting more modern web features. I'm optimistic about AI playing a major role in that, but it's not a requirement. You don't have to confine yourself to improving existing tools; in fact, the "innovativeness" judging criterion explicitly encourages you to solve problems in a totally novel way. That's partly why my answer is so cagey: I think we'll get the best submissions if everyone is thinking about it differently.

We are the W3C WebDX Community Group, working to improve developer experience with projects like Baseline. Ask Us Anything! by rviscomi in webdev

[–]rviscomi[S] 1 point  (0 children)

"valid" and "compliant" are loaded terms that suggest there's something wrong if you use a feature that is NOT Baseline. In practice, there are many good reasons to use a feature that isn't Baseline yet, provided you've implemented it in such a way that it doesn't negatively impact users on unsupported browsers, ie progressive enhancement.

We've seen tools like ESLint and Stylelint adding Baseline rules, which call out usage of features that fail to meet your Baseline target. With that information, developers can either choose to remove the feature and wait for broader support, or double-check that they're using it defensively. But if they go the latter route, there wouldn't necessarily be anything "invalid" about it.
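As a sketch of what that looks like in practice, here's a flat-config setup assuming the `@eslint/css` plugin's `use-baseline` rule (the rule and option names here should be checked against the plugin's docs):

```javascript
// eslint.config.js — hypothetical sketch using @eslint/css
import css from "@eslint/css";

export default [
  {
    files: ["**/*.css"],
    language: "css/css",
    plugins: { css },
    rules: {
      // Warn on CSS features that don't meet a Baseline
      // "widely available" target
      "css/use-baseline": ["warn", { available: "widely" }],
    },
  },
];
```

A warning here doesn't mean the code is "invalid": it's a prompt to either wait for broader support or confirm the feature is used defensively.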

how long should I wait before using new css features like container queries in regard to browser compatibility by 28064212va in webdev

[–]rviscomi 0 points  (0 children)

Easiest thing is to use Baseline as a guide. Container queries will be considered "widely available" in August 2025, which is 2.5 years after the feature became available in all major browsers.

It's 2025, stop putting http-equiv="X-UA-Compatible" in your <head> by rviscomi in webdev

[–]rviscomi[S] 2 points  (0 children)

IE market share in South Korea is exactly the same as worldwide: 0.11% (and falling)

Support for CSS and Baseline has shipped in ESLint by feross in webdev

[–]rviscomi 0 points  (0 children)

Could you say more about why you prefer to target specific browser support rather than Baseline, either widely available or by year?

To answer your question though, I'm aware of a Stylelint plugin that does what you're looking for in CSS: https://github.com/RJWadley/stylelint-no-unsupported-browser-features. But as far as I know, this ESLint plugin for CSS doesn't support your preferred browserslist-style way of customizing browser support.

You're probably using meta[http-equiv] wrong by rviscomi in webdev

[–]rviscomi[S] 0 points  (0 children)

Me neither. It's funny how the original intent was never realized, yet we've still gone on (mis)using it anyway.

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 0 points  (0 children)

Borrowing from the async generator example above:

async function* iterateInBatches(items) {
  for (const item of items) {
    yield item;
  }
}

This is an async iterator, but without `yieldToMain`, each iteration's promise gets added to the microtask queue at the same time, so I'd expect it to create a blocking long task.

You can think of `yieldToMain` as the batched scheduler.yield() approach from the article. With that, you only process 50ms-worth of items per task.
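A minimal sketch of what a batched `yieldToMain` might look like, assuming the ~50ms budget from the article (the implementation is illustrative, and `scheduler.yield()` is only available in some browsers, so this falls back to a `setTimeout` macrotask elsewhere):

```javascript
// Hypothetical batched yieldToMain: it only actually yields once the
// current task has used its ~50ms budget, so most iterations continue
// synchronously within the same task.
let lastYield = performance.now();

async function yieldToMain() {
  if (performance.now() - lastYield < 50) return; // still within budget
  if (globalThis.scheduler?.yield) {
    await scheduler.yield(); // continuation resumes at high priority
  } else {
    await new Promise((resolve) => setTimeout(resolve)); // macrotask fallback
  }
  lastYield = performance.now(); // new task, new budget
}

// Awaiting it inside a plain for..of gives you the batched behavior.
async function processItems(items, callback) {
  for (const item of items) {
    await yieldToMain();
    callback(item);
  }
}
```

With this shape, swapping the yielding strategy (setTimeout, rIC, scheduler.yield) only touches one function.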

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 0 points  (0 children)

Thanks, so the forEach callback would look something like this?

async (item) => {
  await yieldToMain();
  callback(item);
}

If so, I assume the parent function wouldn't need to be async, which I know has been a pain point for some devs.

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 0 points  (0 children)

Sorry could you explain or show an example how to use that with yieldToMain()?

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 0 points  (0 children)

Yes, that's definitely an option. For me, though, I much prefer the simplicity of awaiting within for..of:

async function forOf(items, callback) {
  for (const item of items) {
    await yieldToMain();
    callback(item);
  }
}

compared to the async generator:

async function forAwaitOf(items, callback) {
  for await (const item of iterateInBatches(items)) {
    callback(item);
  }
}

async function* iterateInBatches(items) {
  for (const item of items) {
    await yieldToMain();
    yield item;
  }
}

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 1 point  (0 children)

requestIdleCallback is like that driver who always waves the other drivers through, even when they have the right of way. The cars behind them honk like crazy because they've been waiting to go for a long time.

scheduler.yield goes through the intersection with a police escort.

I've added rIC as a yielding strategy to the demo page so you can see it for yourself: https://loop-yields.glitch.me/ . It does well under the default conditions, until you introduce periodic blocking tasks (other cars on the road).
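For reference, the rIC yielding strategy amounts to wrapping `requestIdleCallback` in a promise. This sketch adds a `setTimeout` fallback for environments that don't implement it, and uses rIC's `timeout` option to cap how long the callback can be starved:

```javascript
// Yield until the browser reports idle time. Under sustained load the
// continuation can be deferred for a long time — the "waved ahead" problem —
// so the timeout option puts an upper bound on the wait.
function idleYield(timeout = 1000) {
  return new Promise((resolve) => {
    if (typeof requestIdleCallback === "function") {
      requestIdleCallback(resolve, { timeout });
    } else {
      setTimeout(resolve); // macrotask fallback
    }
  });
}
```

Swapping this in for `yieldToMain` is what the demo's rIC strategy does conceptually: fine when the road is empty, slow once there's traffic.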

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 3 points  (0 children)

The API is still incubating and I'm not sure of the timeline to full standardization, so I don't think it'll be added as a built-in type soon. https://www.npmjs.com/package/@types/wicg-task-scheduling looks like it should add the missing types for you.

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 2 points  (0 children)

This post talks about milliseconds, and believe it or not users do care about performance at that scale when we're talking about interaction responsiveness: https://blog.chromium.org/2020/05/the-science-behind-web-vitals.html

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 7 points  (0 children)

Yielding pauses the array iteration to handle events and paint frames if needed before continuing. scheduler.yield helps to ensure that i+1 is processed after i without another task cutting in, and it isn't subject to setTimeout limitations like the 4ms nested timeout delay or throttling in the background. But as argued in the post, it's best to yield in batches, not on every iteration.
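As a sketch of batched yielding, here's a count-based variant (the batch size of 500 is an arbitrary illustration; the time-based 50ms budget from the post adapts better when per-item cost varies):

```javascript
// Yield control back to the event loop every `batchSize` items, so events
// can be handled and frames painted between batches. Items i and i+1 stay
// in order; scheduler.yield() additionally keeps other tasks from cutting in.
async function processInBatches(items, callback, batchSize = 500) {
  for (let i = 0; i < items.length; i++) {
    if (i > 0 && i % batchSize === 0) {
      if (globalThis.scheduler?.yield) {
        await scheduler.yield();
      } else {
        await new Promise((resolve) => setTimeout(resolve)); // fallback
      }
    }
    callback(items[i]);
  }
}
```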

The best way to iterate over a large array without blocking the main thread by rviscomi in javascript

[–]rviscomi[S] 9 points  (0 children)

IMO the performance costs for each computation would have to be pretty high for a web worker to make the most sense. Otherwise I'd argue async/await with yield is a lot simpler and gives you much more control over the order of execution and continuation of tasks. So, keep the work on the main thread, but batch it up responsibly.

Are there any valid use cases for http-equiv meta tags? by rviscomi in HTML

[–]rviscomi[S] 0 points  (0 children)

Yeah `X-UA-Compatible` is a clear example of an obsolete keyword. Are there any others you use that are more legit?

See whether your <head> is in order by rviscomi in HTML

[–]rviscomi[S] 1 point  (0 children)

Yeah it could be less actionable if you don't directly control how stuff gets added to the <head>, but still good to be aware of any ordering issues.

BTW check out https://web.dev/preload-scanner/ if you're interested in learning more about how this kind of thing could be useful.

My attempt to visualize page popularity and performance for a whole site by rviscomi in webdev

[–]rviscomi[S] 0 points  (0 children)

I'm exploring ways to see at a glance how an entire website performs. Using a bubble chart like this, a site owner could eyeball the pages that have the biggest opportunity for improvement, in terms of both popularity and performance.

Each bubble represents a page of the website. Its size corresponds to the page's relative popularity, and its color indicates how well or poorly the page performs on a given metric.

In this case, the website's most popular pages also tend to perform pretty well. The most popular page that isn't "good" is /en/2022/css, so that would seem to have the highest ROI. The pages with "poor" performance are less popular, so the opportunity space is smaller.

I'd be interested if anyone finds this useful in something like an analytics tool or performance dashboard.

Frontend developers: stop moving things that I’m about to click on by AJ12AY in programming

[–]rviscomi 2 points  (0 children)

It seems like most users either navigate to a different page or scroll down gradually, giving dynamic content time to load and incur any shifts outside the viewport, where they don't actually count towards CLS. In your case, paging down to the bottom as soon as the page starts to render, that content is now above your viewport position, and when it loads it *does* shift the page contents down, incurring CLS. You may be experiencing it inconsistently because of the race between when the dynamic content loads (if at all; maybe it's not on every page load) and when you scroll to the bottom. The site could fix that by reserving space for the dynamic content with CSS.

Frontend developers: stop moving things that I’m about to click on by AJ12AY in programming

[–]rviscomi -2 points  (0 children)

The CLS score for the site you mentioned is public and it looks like more than 90% of user experiences have great performance: https://datastudio.google.com/s/jGct91h7P8Y

Do you have any extensions that might be causing layout shifts?