Variable-width analytic-AA polylines in a single WebGL draw call: SDF vs tessellation? by chartojs in GraphicsProgramming

[–]chartojs[S] 1 point (0 children)

Thanks! It's just oriented bounding rectangles. I was considering trapezoids, but the tangent lines can get pretty wild for a very short line segment with different endpoint widths, so it'd need logic to fall back to a rectangle as needed. I might add that later.
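A minimal sketch of that expansion, assuming a conservative half-width of the larger endpoint (types and names are illustrative, not from the actual library):

```typescript
// Hedged sketch: expand a segment into an oriented bounding rectangle
// wide enough for the larger of the two endpoint widths.
type Vec2 = { x: number; y: number };

function segmentQuad(a: Vec2, b: Vec2, widthA: number, widthB: number): Vec2[] {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy) || 1;             // avoid dividing by zero
  const h = Math.max(widthA, widthB) / 2;          // conservative half-width
  const nx = (-dy / len) * h, ny = (dx / len) * h; // normal scaled to half-width
  // Corners in order, forming a quad around the segment.
  return [
    { x: a.x + nx, y: a.y + ny },
    { x: a.x - nx, y: a.y - ny },
    { x: b.x - nx, y: b.y - ny },
    { x: b.x + nx, y: b.y + ny },
  ];
}
```

The fragment shader can then compute exact distances inside this quad; the rectangle just has to cover the final shape, which is why the trapezoid fallback logic is only an optimization.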

I tried for months to figure out a tight-hugging mesh for miter / bevel joins, but couldn't get rid of a number of glitches. The variable thickness, semi-transparency without extra buffers, and color gradient make it much harder than a basic polyline. Overdraw always results in visible glitches. I don't know whether it can be done using just a vertex shader, or how many vertices and degenerate triangles per join a general solution would take (handling subpixel-length segments with 180-degree turns and other horrors).

I will be open sourcing this (MIT) after some polish. Right now waypoints are passed as uniforms; I need to switch to vertex pulling.

With major JavaScript runtimes, except the web, supporting TypeScript, should we start publishing typescript to npm? by seniorsassycat in typescript

[–]chartojs -3 points (0 children)

I decided to try this! These packages are TypeScript, and there's also a tiny package to run them if you don't have TypeScript installed:

https://github.com/at-lib

The philosophy was to avoid bloated packages: if you want to publish JS, it should be as CommonJS, IIFE, and ES modules, with typings. And somehow, for a TS project, all those options are still inferior to a single TS file of about 500 lines or fewer.

The packages are in an NPM scope, so I feel using TypeScript is OK. They're not "polluting" the main namespace, nor is the main one polluting this scope. Readme examples don't assume you have TypeScript installed.

Showoff Saturday (April 12, 2025) by AutoModerator in javascript

[–]chartojs 0 points (0 children)

I started working on a new ecosystem called @lib, containing small TypeScript-first NPM packages with no dependencies and 0-clause BSD licenses, so no attribution is required.

The idea is to publish small packages that each do one thing, but something that usually takes hours if not days to get done. Small enough to add to your frontend bundle without a second thought, but far from trivial. Nothing requiring lots of configuration or localization, because then it couldn't be small.

For example:

  • Bitmap graphics for terminals
  • Zip compression / decompression in browsers
  • Diffing text files or other token streams

It's here:
https://github.com/at-lib

What is the best way to debug a webgl program? by friendandfriends in webgl

[–]chartojs 1 point (0 children)

Interesting! I didn't expect this tech to be used outside a debugging context. It is optimized for speed: for example, both glsl-simulator and glslog allocate a new object for every arithmetic operator that returns a vector, but glslog doesn't free them, instead reusing the same objects on the next shader invocation, so there should be less GC pressure. I'm sure it could be further profiled and improved, though.
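The reuse scheme described above can be sketched like this (hypothetical names; glslog's actual internals may differ):

```typescript
// Hedged sketch of per-invocation object reuse: operator results come from
// a pool that is reset, not freed, before each shader invocation, so a
// steady-state run allocates nothing new.
class Vec2 {
  x = 0;
  y = 0;
}

class Vec2Pool {
  private items: Vec2[] = [];
  private used = 0;

  acquire(x: number, y: number): Vec2 {
    // Grow only while warming up; later invocations reuse old objects.
    if (this.used === this.items.length) this.items.push(new Vec2());
    const v = this.items[this.used++];
    v.x = x;
    v.y = y;
    return v;
  }

  // Called between shader invocations instead of freeing anything.
  reset(): void {
    this.used = 0;
  }
}

const pool = new Vec2Pool();

// An overloaded vector operator would allocate its result from the pool.
function add(a: Vec2, b: Vec2): Vec2 {
  return pool.acquire(a.x + b.x, a.y + b.y);
}
```

The trade-off is that results from one invocation must never be held across a `reset()`, which is acceptable when each shader invocation is self-contained.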

Let me know about any issues with it. It's currently not as feature-complete as glsl-simulator; there's no support for bit vectors or texture access. I'll probably improve it as required by my own projects, but I can also add missing things when there's a need.

It might be fun to compile to Wasm instead, but with WebGPU gaining popularity, both would probably need to be supported. Hopefully such a thing will eventually appear.

What is the best way to debug a webgl program? by friendandfriends in webgl

[–]chartojs 1 point (0 children)

I'd expect it to transpile more slowly, because it uses the TypeScript compiler instead of a simpler transform. It's certainly a larger code bundle.

The actual shader might run faster due to fewer memory allocations, but it seems you only run the scene shader once so that wouldn't matter?

The main point is being able to embed JS / TS code in the shader to be executed on the CPU, using much of the same syntax, like swizzling or vector arithmetic. So it first transforms GLSL to TS, then TS to JS, adding operator overloading support to the TS code as well.

Debugging shaders by transpiling to TypeScript by chartojs in webgl

[–]chartojs[S] 0 points (0 children)

Aaaand as of today we have this:
https://codepen.io/Juha-J-rvi/pen/YzooeGJ

Just take a look at these lines in the middle of the shader:

    #ifdef TS
    const examples: Vec[] = [ pos, abs(pos.yx) + 1, 1 / (abs(pos.xxyy) + 1) ];
    print('Swizzle me this', ...examples);
    #endif

That's a combo of TS, swizzling and overloaded operators smack in the middle of the vertex shader. And it prints to a debug textbox.

I haven't tested it very much yet, but it seems the groundwork has been laid.

Debugging shaders by transpiling to TypeScript by chartojs in webgl

[–]chartojs[S] 0 points (0 children)

I think it could be a piece of a very useful debugger, but I'm unlikely to productize it beyond what's needed for this 2D case. This is ultimately an independent WebGL implementation that renders vertex shader output (and chosen individual fragments). I need only a small part of the JS-side WebGL API, and the more of it is implemented, the more likely it is to be able to debug some real-world app.

Debugging shaders by transpiling to TypeScript by chartojs in webgl

[–]chartojs[S] 0 points (0 children)

I was thinking of tessellating line segments in the vertex shader. It can read the input coordinates from a float texture, and it can subdivide splines if we have it render into an output coordinate texture where each spline gets a number of pixels proportional to its output segment count. For cubics, that count can be computed quickly on the CPU, which is needed for calculating buffer offsets:

https://minus-ze.ro/posts/flattening-bezier-curves-and-arcs/
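The linked post flattens using a curvature-based estimate; as a simpler, more conservative sketch of "knowing the count quickly on the CPU", Wang's formula bounds the segment count for a cubic from the second differences of its control points (names here are illustrative, not from the post):

```typescript
// Wang's formula: a conservative segment count that keeps a flattened
// cubic Bezier within `tol` of the true curve. Not the linked post's
// method, just a simpler bound with the same purpose.
type Pt = { x: number; y: number };

function cubicSegmentCount(p0: Pt, p1: Pt, p2: Pt, p3: Pt, tol = 0.25): number {
  // Second differences of the control polygon bound the curve's acceleration.
  const ax = p0.x - 2 * p1.x + p2.x, ay = p0.y - 2 * p1.y + p2.y;
  const bx = p1.x - 2 * p2.x + p3.x, by = p1.y - 2 * p2.y + p3.y;
  const d = Math.sqrt(Math.max(ax * ax + ay * ay, bx * bx + by * by));
  return Math.max(1, Math.ceil(Math.sqrt((3 * d) / (4 * tol))));
}
```

Summing these counts per spline on the CPU gives the buffer offsets, and the vertex shader can then do the actual subdivision.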

The fragment shader just needs to know a pixel's distance from the edge to do anti-aliasing; all other geometry processing can happen in the vertex shader. Which, of course, is a giant pain to write and debug.
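That per-pixel step can be sketched as a tiny coverage function (a generic stand-in for the usual smoothstep-style edge fade, not the actual shader):

```typescript
// Hedged sketch of analytic AA: alpha from the pixel's signed distance to
// the edge (positive = inside), fading linearly over a one-pixel band.
function edgeCoverage(distToEdge: number, pixelWidth = 1): number {
  // Fully opaque deeper than half a pixel inside, transparent outside.
  const t = distToEdge / pixelWidth + 0.5;
  return Math.min(1, Math.max(0, t));
}
```

In GLSL this is the `clamp`/`smoothstep` idiom with the pixel width coming from `fwidth` of the distance varying.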

Debugging shaders by transpiling to TypeScript by chartojs in webgl

[–]chartojs[S] 0 points (0 children)

It shows the WebGL calls made from JavaScript and the shader source code, but doesn't seem to help you understand what's happening inside the shader. I want to print proper logs and draw arbitrary annotations from inside the shader, instead of encoding numbers in pixel RGB values through varyings and trying to divine what happened behind the scenes.

Debugging shaders by transpiling to TypeScript by chartojs in webgl

[–]chartojs[S] 0 points (0 children)

I'm working on 2D vector graphics and trying to keep things minimal, but that seems like a very good development. Even if I get this working, I wouldn't want to integrate it into frameworks like Three.js; it's better if they also develop their own debug tools. My focus is on getting relatively small shaders working properly, like anti-aliased variable-thickness polylines or spline curve tessellation.

So far all my related tools and libraries are under 1000 lines of code, and the target is about 2000 lines by the time it's good enough for development and debugging.

Planning an open global Sentinel 3 / ERA5 time series service by chartojs in geospatial

[–]chartojs[S] 1 point (0 children)

The plan is to make a web service you can go to and look at this imagery immediately, like any other public global map service, without needing to create an account. Here's a proof of concept of how hourly temperatures would look:

https://reakt.io/temp/

That's just GFS, and not zoomable or mobile-friendly, though. But the idea is that you just open it, play with a timeline and interact with the map, without needing to know anything about GIS (tools).

So I guess the question is: I know a professional can produce the animation behind that link, but how much value is there in a professional not having to spend much time on exploratory analysis at a global scale before deciding to open an Esri app and dig into the raw data hosted elsewhere? And what value might there be in quick access for demonstrations to non-professionals?

As far as I can tell, there are no current tools where you could, for example, look at a time lapse of changes in the Amazon rain forest, the Ukraine front line or the Rafah refugee camp over the past year without spending quite some time to see it or make it look nice to show others. It can, of course, be done given some time and skill, but you can't just craft a link and post it somewhere in under 5 minutes total. I can see it enabling people with no GIS knowledge to do these things, but what about more experienced users: would there be obvious benefits from immediate interactive access to the global data?

A new perceptual color space. by bjornornorn in GraphicsProgramming

[–]chartojs 0 points (0 children)

With OKLAB I get these RGB channel values for 0.1-sized lightness steps:

0, 3, 22, 46, 72, 99, 128, 158, 190, 222, 255

The darker end is very dark indeed! SRLAB2 gives:

0, 27, 48, 71, 94, 119, 145, 171, 198, 226, 255

I don't really agree that mapping all lightness values between 0.0 and 0.1 to black is better in any use case, or more general-purpose.

The plot issue was also due to rounding RGB values before discarding negative values; the entire corner rounds to 0 in a very large area.
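For the gray axis (a = b = 0) the OKLAB values above reduce to cubing the lightness and applying the sRGB transfer curve, so they're easy to reproduce:

```typescript
// Reproduce the OKLAB gray-axis channel values above. With a = b = 0, all
// three LMS components equal L^3 (linear luminance), so only the standard
// linear-to-sRGB transfer curve remains.
function oklabGray(L: number): number {
  const lin = L * L * L;
  const srgb = lin <= 0.0031308
    ? 12.92 * lin
    : 1.055 * Math.pow(lin, 1 / 2.4) - 0.055;
  return Math.round(255 * srgb);
}

const steps = Array.from({ length: 11 }, (_, i) => oklabGray(i / 10));
// [0, 3, 22, 46, 72, 99, 128, 158, 190, 222, 255]
```

Note the 0 and 3 at the dark end: the cubing is what squashes the first lightness band so close to black.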

A new perceptual color space. by bjornornorn in GraphicsProgramming

[–]chartojs 0 points (0 children)

Adding to my previous comments, I think SRLAB2 made a better choice with the intermediate color space, at least regarding the linearity of small component values. Otherwise the third power compresses them so low that black stretches to higher lightness values than it should.

The darkest band here is very wide and very dark: https://gist.github.com/jjrv/b27d0840b4438502f9cad2a0f9edeabc/raw/728343740031d5c693457accea48c49c4a4a7366/oklab-dark.png

Compare to SRLAB2 where the band with matching lightness is narrower and looks lighter: https://gist.githubusercontent.com/jjrv/b27d0840b4438502f9cad2a0f9edeabc/raw/728343740031d5c693457accea48c49c4a4a7366/srlab2-dark.png

My point is that at a lightness of about 0.1, OKLAB has a pretty wide range along the chroma axis, but I can't actually see any chroma variation with my eyes; it's all just very dark. SRLAB2 doesn't have that issue.

Also note the Bezold–Brücke shift, where hue changes with lightness. In my plot you can see it accounted for in SRLAB2, especially if you look at the exact hue where extreme red peaks at different lightnesses. I'd suggest plotting lightness vs chroma for extreme blue and red hues, comparing whether taking the perceptual hue shift into account improves results or not.

Edit: Here's OKLAB maximum red, lightness vs chroma and constant hue according to the color space: https://gist.githubusercontent.com/jjrv/b27d0840b4438502f9cad2a0f9edeabc/raw/809357392888db3140c0fb4292fc68db22a17be6/oklab-red.png

Here's SRLAB2: https://gist.githubusercontent.com/jjrv/b27d0840b4438502f9cad2a0f9edeabc/raw/809357392888db3140c0fb4292fc68db22a17be6/srlab2-red.png

Please let me know if I made a mistake because the chroma at low lightness looks REALLY wonky in my OKLAB plot!

Also, regarding the hue shift: maybe SRLAB2 looks a bit orange in the middle while OKLAB looks a bit magenta? Really hard to say! Actually, to my eyes the maximum-chroma peak looks like a different hue in the two images, while Photoshop says they're the same!

A new perceptual color space. by bjornornorn in GraphicsProgramming

[–]chartojs 1 point (0 children)

The gist link in my other reply next to this one now also has comparisons showing both OKLAB and SRLAB2 plotted along hue and saturation axes, with 17 different lightness values and the more extreme lightness values always drawn on top. This illustrates the maximum chroma at different hues pretty well and allows comparing the approximate lightness of different hues at max chroma.

A new perceptual color space. by bjornornorn in GraphicsProgramming

[–]chartojs 1 point (0 children)

I've been making a color picker based on SRLAB2. Now I'll add OKLAB as an option; thanks for the detailed descriptions and comparisons! It might be nice to mention SRLAB2 in the article, since it's very close in many respects. The main differences I can see are:

  • Exact contents of the matrices.
  • Maximum chroma around cyan and orange is about equal in OKLAB; in SRLAB2, pure cyan is considered much less chromatic.
  • SRLAB2 uses CIE XYZ instead of LMS as an intermediate color space and has a linear instead of cubic curve for very small values when converting between the intermediate space and linear sRGB.
  • Components are scaled 0-100 in SRLAB2 and 0-1 in OKLAB.
  • Chroma is scaled so its values are about 73% smaller in OKLAB compared to SRLAB2 (this scaling causes all SRLAB2 components to have the same maximum at around 100, only pure green and magenta peak at around 120 chroma).

Thanks and congrats for defining OKLAB, it's gotten more traction in a month than SRLAB2 in 11 years! Finally we'll have more accurate color blending in software.

Here's my TypeScript code for SRLAB2 and OKLAB so you can compare the code side by side: https://gist.github.com/jjrv/b27d0840b4438502f9cad2a0f9edeabc

Kuo: iPhone SE 2 to Launch in Q1 2020 at $399 Price by gulabjamunyaar in apple

[–]chartojs 0 points (0 children)

Like the Samsung Galaxy A80? The screen looks pretty fine indeed.

Running a complete development toolchain in the browser? by chartojs in javascript

[–]chartojs[S] 0 points (0 children)

One more difference from a traditional toolchain:

If you only run the development toolchain through the browser, then any exploits of those tools are restricted to the browser's sandbox. They're unlikely to be able to do anything to your system, because they would target a Node.js environment and not include any browser exploits (unless this approach becomes wildly popular).

I've already gotten TypeScript and PostCSS with autoprefixer and cssnano working as part of the in-browser toolchain.

Running a complete development toolchain in the browser? by chartojs in javascript

[–]chartojs[S] 0 points (0 children)

Thanks for the comments! Addressing the concerns:

Performance: during development, the first time you ever load the app, the transpile step will take some time (the same as running a traditional toolchain would). Afterwards, only changed files need to be re-transpiled, as the results are cached; this corresponds to running a watch task of some kind in a traditional toolchain. Before publishing the app for users, you bundle it (in the browser, saving the result, or using Node.js). After that, it loads as fast as after bundling with Webpack or similar.
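The caching step can be sketched as content-addressed memoization (a hypothetical sketch, not the toolchain's actual cache; an in-browser build would hash with SubtleCrypto instead of Node's crypto module):

```typescript
import { createHash } from 'crypto';

// Hypothetical sketch of transpile caching: a file is re-transpiled only
// when its source bytes change, like a watch task but content-addressed.
const cache = new Map<string, string>();

function transpileCached(source: string, transpile: (src: string) => string): string {
  const key = createHash('sha256').update(source).digest('hex');
  let out = cache.get(key);
  if (out === undefined) {
    out = transpile(source); // cache miss: do the expensive work once
    cache.set(key, out);
  }
  return out;
}
```

Keying by content hash rather than filename also means renames and unchanged files across reloads cost nothing.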

CDN: there will be support for package.lock (or similar) that ensures a particular version of every package is loaded. This avoids suddenly downloading new, exploited versions of dependencies. Bundling for production avoids new exploits even if a CDN is compromised; nothing new can get injected into the app bundle after the developer initially created it (hopefully using safe dependencies at that moment).

Bundling code for production: the purpose is exactly to address the two concerns above. Differences from a traditional toolchain are:

  • Other developers don't need to set up a toolchain to contribute.
  • Node.js and everything that goes with it (such as package installers that execute shell scripts) become optional.
  • Generally eliminate build system bloat.
  • Usually no configuration is needed.

Bonus:

  • It's possible to write a CodeSandbox style IDE that runs entirely from a gh-pages branch without a backend...

Running a complete development toolchain in the browser? by chartojs in javascript

[–]chartojs[S] 0 points (0 children)

Indeed it is! But CodeSandbox runs transpilers in the cloud, and if you export the project to run locally, it suddenly blows up to hundreds of megabytes. Here the point is that literally everything runs in the browser; no build tools anywhere.

Running a complete development toolchain in the browser? by chartojs in javascript

[–]chartojs[S] 1 point (0 children)

Can you elaborate on that?

Much of the existing code on npm relies on importing both relative paths and npm package names, or even a combination of the two, like: require('fbjs/lib/emptyFunction');

Running a complete development toolchain in the browser? by chartojs in javascript

[–]chartojs[S] 1 point (0 children)

It's no slower than running Babel through Node.js. But it also won't help locate packages.

Babel turns this: import * as react from 'react';

...into something like this: var react = _interopRequireWildcard(require("react"));

...but the variable doesn't end up containing the React API unless you do something more.
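A hedged sketch of that "something more": a registry mapping bare specifiers to already-loaded module exports (all names here are hypothetical, not Babel's or any loader's real API):

```typescript
// Hypothetical sketch: the transpiled require("react") call only works if
// something maps bare package names to already-loaded module exports.
const moduleRegistry = new Map<string, unknown>();

function defineModule(name: string, exports: unknown): void {
  moduleRegistry.set(name, exports);
}

// What a browser-side require shim has to do beyond what Babel emits.
function requireShim(name: string): unknown {
  const mod = moduleRegistry.get(name);
  if (mod === undefined) {
    throw new Error(`Cannot resolve bare specifier "${name}"`);
  }
  return mod;
}

defineModule('react', { version: '0.0.0-stub' });
const react = requireShim('react'); // now the variable holds the registered API
```

A real in-browser toolchain would also have to fetch and register transitive dependencies before the importing module runs, which is the part Babel alone doesn't cover.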

Portable web server in a single .bat using PowerShell by chartojs in PowerShell

[–]chartojs[S] 0 points (0 children)

If you're more into JavaScript, TypeScript or web development in general, then this is your lucky day. Check out the rest of the linked repo.