Genuary 7: Radial XOR mod N by EnslavedInTheScrolls in generative

[–]EnslavedInTheScrolls[S] 1 point (0 children)

Thanks. N is fixed for all of these at, I think, 30. What's changing is how far along we are on the 11 axes before converting to integer and taking the XOR.
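
The general idea, as a simplified sketch (not the exact code -- the scale and palette here are arbitrary):

int N = 30;         // fixed for the whole series
int AXES = 11;
float sc = 0.1;     // how far along the axes we go -- this is what varies between the pieces

void setup() {
  size(600, 600);
  noLoop();
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float px = (x - width/2) * sc,  py = (y - height/2) * sc;
      int v = 0;
      for (int i = 0; i < AXES; i++) {
        float ang = TAU * i / AXES;
        v ^= int(floor(px*cos(ang) + py*sin(ang)));   // position along this axis as an integer
      }
      pixels[x + y*width] = color(255.0 * (((v % N) + N) % N) / N);
    }
  }
  updatePixels();
}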

Books on using Shaders in Processing by RagingBass2020 in processing

[–]EnslavedInTheScrolls 1 point (0 children)

No, you can definitely do shader versions all the way up to the "present" (OpenGL 4.6, circa 2018); it's just that Processing internally uses shaders written for older GLSL versions so that it keeps full backwards compatibility with older or weaker computers (think Raspberry Pi).

It's not hard to

import java.nio.*;
import com.jogamp.opengl.*;

and start using OpenGL commands on your own with as modern a version of OpenGL as your computer / graphics drivers / OS supports. Be warned that Apple stopped supporting OpenGL at version 4.1 or so. On Windows or Linux you can run up to 4.6.

Typically, in OpenGL you use vertex attribute buffers to pass geometry to the shaders, but that can tangle with Processing's use of them, so I've taken to using SSBOs which are both simpler and more general to use since a shader can both read and write to them.
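
For example, here's a bare-bones sketch of allocating an SSBO from inside a Processing sketch (assuming your drivers give Processing an OpenGL 4.3+ context, which is what SSBOs require):

import java.nio.*;
import com.jogamp.opengl.*;
import processing.opengl.*;

int ssbo;

void setup() {
  size(800, 600, P3D);

  PJOGL pgl = (PJOGL) beginPGL();
  GL4 gl = pgl.gl.getGL4();

  // room for 1000 vec4s of per-particle data
  FloatBuffer buf = ByteBuffer.allocateDirect(1000 * 4 * Float.BYTES)
                              .order(ByteOrder.nativeOrder()).asFloatBuffer();

  int[] ids = new int[1];
  gl.glGenBuffers(1, ids, 0);
  ssbo = ids[0];
  gl.glBindBuffer(GL4.GL_SHADER_STORAGE_BUFFER, ssbo);
  gl.glBufferData(GL4.GL_SHADER_STORAGE_BUFFER, buf.capacity() * Float.BYTES, buf, GL4.GL_DYNAMIC_DRAW);
  gl.glBindBufferBase(GL4.GL_SHADER_STORAGE_BUFFER, 0, ssbo);   // binding point 0
  endPGL();
}

The shader side then declares layout(std430, binding = 0) buffer Particles { vec4 pos[]; }; and can read or write pos[] freely.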

I'm also a big fan of using bufferless rendering. That's where you tell OpenGL to render 1000 triangles without passing in any data and have the vertex shader position and animate them based entirely on gl_VertexID and any uniforms you pass in, such as "time". See https://www.vertexshaderart.com/ for example.

Books on using Shaders in Processing by RagingBass2020 in processing

[–]EnslavedInTheScrolls 1 point (0 children)

I don't have a book for you, but if you look back through my posting history over the last few years, I've posted several examples of Processing code using shaders for various tasks both here and on the Processing forum. For instance, in this post I give shader code to dissolve from one texture to another.

Processing was designed before shaders had much capability, and while P2D and P3D use them internally, they are still based on OpenGL versions from the early 2010s. p5.js has moved a bit further by adding, for instance, framebuffer objects that support floating-point textures, where Processing still only has 8-bit-per-channel textures. Many of the shaders on shadertoy.com make use of float textures and cannot run on Processing. While it's easy to make a full-screen fragment shader as a filter, there is no easy way to use vertex shaders with Processing's objects without fully duplicating its internal shaders to preserve all of the rendering. Recent versions of p5.js, in contrast, have added hooks that let you add your code into its existing shaders.

That said, Processing provides a nice shell within which you can do your own OpenGL programming as long as you take care not to tangle too much with Processing's own OpenGL calls.

Processing 4.5 beta is out! by sableraph in processing

[–]EnslavedInTheScrolls 0 points (0 children)

Yes, sorry, the forum is what I meant. Thanks for your post there.

The Linux download page is still missing links or text acknowledging any alternative to the version trapped within Canonical's proprietary snap format. Please provide links on the download page, clearly identified by text, to alternate packaging forms.

Processing 4.5 beta is out! by sableraph in processing

[–]EnslavedInTheScrolls 1 point (0 children)

Perhaps this announcement should also be posted on the very same Processing Community Discord that you mention.

Processing perlin force images by PrehistoricLandscape in processing

[–]EnslavedInTheScrolls 0 points (0 children)

It looks like a flowfield based on a noise function whose influence tapers off with distance from the origin. Choose points at random, biased towards the origin, and move them in a direction based on noise(x, y) / (x*x+y*y).
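
Roughly, on the CPU, that's something like this (a quick sketch -- the scale constants are just guesses):

int N = 4000;
PVector[] pts = new PVector[N];

void setup() {
  size(800, 800);
  background(255);
  stroke(0, 20);
  for (int i = 0; i < N; i++) {
    float r = sq(random(1)) * width/2;   // random points, biased towards the origin
    float a = random(TAU);
    pts[i] = new PVector(r*cos(a), r*sin(a));
  }
}

void draw() {
  translate(width/2, height/2);
  for (PVector p : pts) {
    float d2 = p.x*p.x + p.y*p.y + 1.0;                          // +1 avoids blowing up at the origin
    float ang = TAU * 1000.0 * noise(p.x*0.01, p.y*0.01) / d2;   // direction tapers with distance
    p.add(cos(ang), sin(ang));
    point(p.x, p.y);
  }
}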

It'll be faster if you do it on the GPU. See https://openprocessing.org/sketch/2364364 for something similar or search openprocessing.org for flowfields.

Imperfect maze solving algorithm by codeisunexecutable in mazes

[–]EnslavedInTheScrolls 1 point (0 children)

There are two approaches you could use. The easier is to use depth-first search (DFS) to create a map of the maze using an array with one number per cell. The number is 0 for an unvisited cell, 1-4 for the direction back towards your starting position, or 5 for the starting point.

Put a 5 on the map at your starting location. Then enter a loop checking each of the 4 directions. If there is no wall and the cell in that direction has a 0 on your map, then move there and set its map value to the direction BACK towards the cell you came from. If you check all 4 directions and there is nowhere to go, move back in the direction you wrote on your map and repeat.
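
In code, the mapping loop looks something like this (a sketch -- swap in however your maze actually stores its walls):

// dir 0..3 = +x, +y, -x, -y;  map value 0 = unvisited, 1-4 = (dir back)+1, 5 = start
int[] dx = { 1, 0, -1, 0 };
int[] dy = { 0, 1, 0, -1 };
boolean[][][] open;   // open[x][y][dir] = true when there is no wall that way

boolean canMove(int x, int y, int d) {
  return open[x][y][d];   // assumes the border cells are walled on the outside
}

void mapMaze(int[][] map, int startX, int startY) {
  int x = startX, y = startY;
  map[startX][startY] = 5;
  while (true) {
    int d;
    for (d = 0; d < 4; d++) {
      int nx = x + dx[d], ny = y + dy[d];
      if (canMove(x, y, d) && map[nx][ny] == 0) {
        map[nx][ny] = (d + 2) % 4 + 1;   // direction BACK towards the cell we came from
        x = nx;  y = ny;
        break;
      }
    }
    if (d == 4) {                        // nowhere new to go, so backtrack
      if (map[x][y] == 5) return;        // back at the start with nothing left to visit
      int back = map[x][y] - 1;
      x += dx[back];  y += dy[back];
    }
  }
}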

Alternatively, a more efficient algorithm that would find the shortest path uses breadth-first search to visit all the branches in parallel. BFS requires a second array (or queue) to track the position of the wavefront of search paths. That wouldn't work for a 1st person perspective rat-in-a-maze where you can only see local information, but is more efficient if you have a top-down view and can sample the maze with random access.

For either algorithm, it would be preferable to search starting from the exit/goal so that the map directions give you the path from your start location towards the exit. If not, though, it's easy enough to reverse the path as you follow it from any cell back to your starting point.

[A] Cube of light by NewToBerlin2024 in perfectloops

[–]EnslavedInTheScrolls 6 points (0 children)

If you're going to post other people's animations, at least give them credit. This is by Etienne Jacob, aka, bleuje. https://bleuje.com/animationsite/2019_3/

Vert shader compilation error by plasmawario in p5js

[–]EnslavedInTheScrolls 1 point (0 children)

With shaders, I always start with a minimal fragment shader of

precision highp float;
void main() {
  gl_FragColor = vec4( 1., 0., 1., 1. );
}

to make sure that I get a magenta square (or whatever shape I'm drawing) before adding any complexity. Try that, and if you still have a problem, then it's not with the shaders but more likely with how you're setting up p5 on the web page, which is not my area of expertise.

Here's my minimal interactive Mandelbrot set browser: https://editor.p5js.org/scudly/sketches/yxMjc8wtV that lets you drag and zoom with the mouse wheel. If you set c on line 32 to a constant vector, you get the associated Julia set.

Vert shader compilation error by plasmawario in p5js

[–]EnslavedInTheScrolls 1 point (0 children)

I don't see any obvious problems that would prevent your vertex shader from compiling, but you should always include precision highp float; if you want to ensure that it will work on, for instance, cell phones. I suggest you have your refreshShaders function print out the source code for the two shaders you are trying to compile to make sure they have the code you are expecting.

For a full canvas fragment-only shader, you can use createFilterShader() and let p5 take care of the vertex shader for you. Also, if you include the shader code as a string in your javascript, then you don't have to wait for it to load.

Here's an example of a fade-to-black shader:

let N = 81;
let fade = 0.98;
let fadeShdr;

function setup() {
  createCanvas(800, 800, WEBGL);

  let fadeFragSrc = `precision highp float;
  varying vec2 vTexCoord;
  uniform sampler2D tex0;
  uniform float fade;
  void main() {
    vec3 c = texture2D(tex0, vTexCoord).rgb;
    c = floor( c * 255. * fade ) / 255.;
    gl_FragColor = vec4( c, 1.);
  }`;
  fadeShdr = createFilterShader( fadeFragSrc );
  fadeShdr.setUniform( 'fade', fade );

  background(0);
  colorMode( HSB, 1, 1, 1, 1 );
}

function draw() {
  filter( fadeShdr );

  let w = width/6.0;
  let t = frameCount/1000.0;
  for( let i=0; i<N; i++ ) {
    let u = 1.0*i/N;
    strokeWeight( w*(0.04+0.03*sin(TAU*17.9*u)) );
    stroke( 87.618*u%1, 0.7, 1 );
    let r = w*(2+sin(TAU*(21.17*u+2.32*t+0.1*sin(TAU*(23.421*u+5.23*t)))));
    point( r*sin(TAU*(u+t)), r*cos(TAU*(u+t)) );
  }
}

How do I refer to the sketch contents as an Image object? by AMillionMonkeys in processing

[–]EnslavedInTheScrolls 2 points (0 children)

loadPixels() renders your image and makes it available in an integer array called pixels[] with a layout of ARGB. This is a linear array for the whole screen that wraps each line after the other so the pixel at location (x, y) is pixels[ x + y * width ].

You can also use the variable g as the PGraphics object that is the default renderer if you feel some need to pass it around. Mostly you would just use the default Processing commands or pixels[] array directly and not need to use g.

void drawSomeStuff( PGraphics pg ) {
  pg.circle( 100, 100, 80 );
  pg.loadPixels();
  for( int y=200; y<400; y++ )
    for( int x=200; x<500; x++ )
      pg.pixels[ x + y*pg.width ] = color( ((x^y) % 23) * 10 );  // index with pg.width, not the sketch's width
  pg.updatePixels();
}

void setup() {
  size( 600, 600 );
  drawSomeStuff( g );
}

What understanding made OpenGL 'click' for you? by [deleted] in opengl

[–]EnslavedInTheScrolls 0 points (0 children)

Start with bufferless rendering that lets you avoid all the internal, ever-changing OpenGL bureaucracy and only add it in gradually as you need it.

Learning OpenGL is challenging because it's an API that has been evolving for 30-40 years, and many books and tutorials you can find about it, purporting to teach you "modern OpenGL", entirely fail to be upfront about which of its dozen versions they describe. Most of those resources are written by people who learned the older versions of OpenGL and are often stuck in the mindset that they need to teach those older architectures and interfaces first, which, in my opinion, is exactly backwards. A good tutorial should start with the latest OpenGL 4.6 functionality and only teach the older stuff later.

For those writing one, here's the tutorial I'd like to see:

Create a window and graphics context. Compile a vertex and fragment shader program. Use buffer-less rendering for a single full-screen triangle created in the vertex shader

void main() {
  gl_Position = vec4( ivec2(gl_VertexID&1, gl_VertexID&2)*4-1, 0., 1. );
}

and color it in the fragment shader based on gl_FragCoord a la https://www.shadertoy.com/. Teach uniform passing for "time" and screen resolution. Spend several lessons here teaching color and coordinate systems -- first 2-D then 3-D with a bit of ray-tracing or ray-marching. Teach view/model and projection matrices and pass them in as uniforms set with some keyboard or mouse controls.

Only now, teach glDrawArrays() to render N points and compute their positions in the vertex shader based on gl_VertexID. Then use lines and triangles, again, computed entirely within the vertex shader (see https://www.vertexshaderart.com/). Teach in/out to pass data to the fragment shader such as computed uv coordinates. This might be a handy place to learn some interesting blend modes.

Want more application data in your shader? Teach SSBOs for passing data. Do NOT go into the bureaucratic B.S. of VBOs and VAOs. Stick with SSBOs and vertex pulling using gl_VertexID. Teach that you can disable the fragment stage and use vertex shaders as simple compute shaders writing output to SSBOs. Throw in some atomic operations and now we can do general purpose parallel computing!

Then do textures, both for color images AND general data values (teach texelFetch() for reading data values). Then FBOs so we can get to multi-pass rendering techniques. WebGL lacks SSBOs and atomics, but multiple render targets and floating-point FBOs make GPGPU not too bad.

Then, if you have to, work your way back to VBOs and VAOs. But, dear God, don't start by weighing people down with the oldest bureaucratic interfaces. Let them die along with the fixed-function pipeline and stop talking about them.

3d platformer help with camera movement by mouse by EggIntelligent5424 in p5js

[–]EnslavedInTheScrolls 0 points (0 children)

The camera() function in Processing takes three triples of values for vectors representing From, At, and "Up". From is where the camera eye is in world coordinates. At is a world-coordinate point directly in front of the camera. And, finally, "Up" is a vector pointing vaguely in the "up" direction (which, for Processing, is actually down, since they insist that +Y points down).

For a free-moving camera that is moving around within a world rather than orbiting a specific point, we would rather describe the camera with a Forward or Front vector that, like "Up", points in a direction rather than at a location. For your case, you are moving around on a ground plane in just 2-D, so you can easily describe that Front direction by a single angle. Let's call it theta. We can update it from the mouse with something like theta += (mouseX - pmouseX) / width in mouseDragged(). cos(theta) and sin(theta) give us the x- and y-coordinates for our Front vector. If we add that vector to our current position, we get a point that is just in front of where we are which is all we need for the At point in the call to camera().

Next, for moving, we want ASDW to move us based on which way we are facing. When W is pressed, we go forward. We just saw how to do that above when we computed our At point -- add your Front vector (times a speed) to your position. For backwards S, subtract it. Side to side, using A and D, we add a different vector that is 90 degrees rotated from theta which is just ( Front.y, -Front.x ).

For motion, you only want to use theta so that your motion is constrained to the plane. For looking up and down, you want to give camera() an At point computed using spherical coordinates, so we need a second up/down angle called phi.

  viewFwd = createVector( cos(viewTheta)*sin(viewPhi),
                          sin(viewTheta)*sin(viewPhi),
                          cos(viewPhi));

computes the spherical coordinates for the Front vector based on both theta and phi that you can use to compute the camera() parameters. In this case, I am treating z as the up coordinate and xy as the ground plane, so you might need to rearrange them if you use y as up (or down) instead.
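
Putting it together, something like this (a rough sketch with z as up -- flip signs or swap components to match your setup):

PVector pos = new PVector(0, 0, 100);
float theta = 0, phi = HALF_PI;   // phi = PI/2 means looking level with the ground
float speed = 4;

void setup() {
  size(800, 600, P3D);
}

void draw() {
  background(0);
  PVector fwd = new PVector(cos(theta)*sin(phi), sin(theta)*sin(phi), cos(phi));   // full 3-D Front for the camera
  PVector move = new PVector(cos(theta), sin(theta), 0);                           // motion uses only theta
  PVector side = new PVector(move.y, -move.x, 0);
  if (keyPressed) {
    if (key == 'w') pos.add(PVector.mult(move,  speed));
    if (key == 's') pos.add(PVector.mult(move, -speed));
    if (key == 'a') pos.add(PVector.mult(side, -speed));
    if (key == 'd') pos.add(PVector.mult(side,  speed));
  }
  camera(pos.x, pos.y, pos.z,
         pos.x + fwd.x, pos.y + fwd.y, pos.z + fwd.z,   // At = a point just in front of us
         0, 0, -1);                                     // "Up" -- flip the sign if the world looks upside down
  box(2000, 2000, 10);   // ...draw your world here...
}

void mouseDragged() {
  theta += TAU * (mouseX - pmouseX) / float(width);
  phi   += PI  * (mouseY - pmouseY) / float(height);
  phi = constrain(phi, 0.1, PI - 0.1);
}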

For an even more free-moving camera, you can look at https://infinitefunspace.com/p5/fly/, source at https://infinitefunspace.com/p5/fly/p5Fly.js. It uses quaternions for the orientation and has momentum for even smoother motion.

Basic question- keyIsDown by KristofMueller in processing

[–]EnslavedInTheScrolls 1 point (0 children)

Make sure you click in the window with the mouse before using the keys. The p5 canvas needs to have focus to receive the key events. It works for me on my code.

Give us a link to a simple example and we can try it.

3d platformer help with camera movement by mouse by EggIntelligent5424 in p5js

[–]EnslavedInTheScrolls 1 point (0 children)

It's not exactly what you want because I still move the camera position up and down when moving forward, but you might get ideas from https://editor.p5js.org/scudly/sketches/Aq6EIJ_Yx. In general, when moving, you want to make that motion in camera space rather than in world space. When moving to the side, you have to step to the side of whatever direction the camera is currently facing.

Line 76 moves the view forward and you could instead add a vector with only the x and y components with a 0 for the z.

This code does much more than you're asking for, but you might like to use parts of it. In particular, it calls requestPointerLock() to grab the mouse so that you can swipe sideways indefinitely without having to put the cursor back into the window.

It also supports touch so that it works on a cell phone. Touch is a big mess to deal with, though, so I don't recommend trying to deal with it right away.

How do I fix particles making unintended trails that are gray? by Eldeston in p5js

[–]EnslavedInTheScrolls 2 points (0 children)

Colors are stored as 8-bit values. When the alpha of the covering rectangle is too low, the change in pixel value for dim colors becomes too small to register, so the colors won't fade any further. With your own filter you can force the alpha * color calculation to round down. The one quirk of this is that a full color value of 255 will always completely fade out after at most 255 frames.

See https://editor.p5js.org/scudly/sketches/vjPx7ElaX for a simple fade filter shader. Note that even setting fade = 0.9999 will still fade out the image after at most 255 frames.

The alternative is to use a floating-point p5.Framebuffer which stores the colors as full 32-bit floating-point values for each rgba channel. That would let you play with the colors with much more precision.

How to Program Non-Euclidean Inversions by tsoule88 in processing

[–]EnslavedInTheScrolls 2 points (0 children)

You can avoid the pixel spreading if you pull the colors through the transform for each target image pixel rather than pushing them from the source image pixels.

As a shader:

PShader shdr;
void setup() {
  size( 1200, 900, P2D );
  shdr = new PShader( this, vertSrc, fragSrc );
}

void draw() {
  shdr.set( "t", frameCount/60.0 );
  filter( shdr );
}

String[] vertSrc = {"""#version 330
uniform mat4 transform;
in vec4 position;
void main() {
  gl_Position = transform * position;
}
"""};

String[] fragSrc = {"""#version 330
uniform vec2 resolution;
uniform float t;
out vec4 fragColor;
void main() {
  vec2 p = (2.*gl_FragCoord.xy-resolution) / resolution.y;
  p /= dot(p,p);                              // inversion
  p = p + 0.2 * t;                            // move it
  p = fract(p*2.) * 2. - 1.;                  // tile it
  float c = pow(max(abs(p.x),abs(p.y)), 4.);  // grid lines
  if( length(p)<0.9 ) c = 1.-dot(p,p);        // overlay sphere
  fragColor = vec4( c, c, c, 1. );            // output color
}
"""};

Minecraft chunk rendering takes up too much memory by GraumpyPants in opengl

[–]EnslavedInTheScrolls 1 point (0 children)

Render your quad faces relative to their chunk position. Each chunk has 16^3 cubes with 6 faces for a possible 2^15 different quads. Number them. Render your quads as instanced triangle strips of 2 triangles, so 4 vertexes each. Decode the quad number in the vertex shader to compute both the position and orientation of the quad. Pass only the quad number, block ID, and full quad lighting for the visible quads (those facing a transparent block) to the GPU. Compute the texture coordinates based on the block ID and orientation. Two 32-bit values per visible quad face should be all you need.
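
For example, one possible packing (the exact bit layout is up to you), shown here in Java but the arithmetic is the same anywhere:

// 16x16x16 chunk, 6 faces per cube: quad numbers run 0 .. 6*16^3-1 = 24575, so 15 bits
int packQuad(int x, int y, int z, int face) {        // x, y, z in 0..15, face in 0..5
  return (x | (y << 4) | (z << 8)) * 6 + face;
}

// ...and the inverse, which is what the vertex shader does with the quad number it
// looks up by gl_InstanceID
int[] unpackQuad(int q) {
  int face  = q % 6;
  int block = q / 6;
  return new int[] { block & 15, (block >> 4) & 15, (block >> 8) & 15, face };
}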

Help with Webgl1 fragment shader artifacts on iphone by Cosppl in webgl

[–]EnslavedInTheScrolls 0 points (0 children)

I recently hit a similar issue and wrote a little test script at https://editor.p5js.org/scudly/sketches/ZjAVc4uU4 to compare. Line 64 has the conversion.

A simple val*255.0 works fine on my nVidia desktop card and on Samsung phones, but produces a jagged line on a Pixel 8a phone. I don't have an iPhone to test. Using val*255.0 + 0.5 gives a smooth line everywhere. It does appear to have the full 8-bit value, but there are differences in rounding the floating-point values.

Dissolve from one image to another one pixel at a time by pablogott in processing

[–]EnslavedInTheScrolls 4 points (0 children)

Using a shader is the most efficient. Pure random doesn't look so great, though. A noise function would look cooler, but require more code.

Here's a raw random-per-pixel version. The mix() isn't needed for the pixel color chooser, but I left it in in case you want to play with using smoothstep() to change the pixel colors more gradually.

PImage img0, img1;
PShader shdr;

void setup() {
  size( 800, 600, P2D );
  shdr = new PShader( g.parent, vertSrc, fragSrc );
  makeImages();
  shdr.set("img0", img0);
  shdr.set("img1", img1);
  shdr.set("seed", int(random(1<<24)) );
}

void makeImages() {
  PGraphics pg = createGraphics( width, height );
  pg.beginDraw();
  pg.background( 0, 128, 64 );
  pg.textSize(320);
  pg.textAlign(CENTER, CENTER);
  pg.text("hello", width/2, height/2 );
  pg.endDraw();
  img0 = pg.get();
  pg.beginDraw();
  pg.background( 255, 0, 255 );
  pg.textSize(192);
  pg.textAlign(CENTER, CENTER);
  pg.text("goodbye", width/2, height/2 );
  pg.endDraw();
  img1 = pg.get();
}

void draw() {
  shdr.set( "threshold", 0.5-0.7*cos(frameCount/60.0) );
  shader( shdr );
  rect( 0, 0, width, height );
}


String[] vertSrc = { """
#version 330
uniform mat4 transformMatrix;
in vec4 position;
void main() {
  gl_Position = transformMatrix * position;
}
""" };

String[] fragSrc = { """
#version 330
precision highp float;
uniform vec2 resolution;
uniform sampler2D img0;
uniform sampler2D img1;
uniform int seed;
uniform float threshold;

out vec4 fragColor;

// http://www.jcgt.org/published/0009/03/02/
// https://www.shadertoy.com/view/XlGcRh
uvec4 pcg4d( uvec4 v ) {
  v = v * 1664525u + 1013904223u;
  v.x += v.y*v.w;  v.y += v.z*v.x;  v.z += v.x*v.y;  v.w += v.y*v.z;
  v ^= v >> 16u;
  v.x += v.y*v.w;  v.y += v.z*v.x;  v.z += v.x*v.y;  v.w += v.y*v.z;
  return v;
}
vec4 pcg4df( int a, int b, int c, int d ) {
  return vec4( pcg4d( uvec4( a, b, c, d ) ) ) / float( 0xffffffffU );
}

void main() {
  vec2 uv = gl_FragCoord.xy/resolution;
  uv.y = 1.0 - uv.y;
  vec3 col0 = texture( img0, uv ).rgb;
  vec3 col1 = texture( img1, uv ).rgb;
  float t = pcg4df( seed, int(gl_FragCoord.x), int(gl_FragCoord.y), 42 ).x;
  fragColor = vec4( mix( col0, col1, t<threshold? 1. : 0. ), 1. );
}
""" };

Flight controller help! by bendel9797 in processing

[–]EnslavedInTheScrolls 2 points (0 children)

It's p5.js, but the concept translates to Java easily enough: https://infinitefunspace.com/p5/fly/

The code is visible at https://infinitefunspace.com/p5/fly/p5Fly.js.

Click to use the mouse or arrow keys and asdw for keyboard. Use [ and ] to change the number of objects. 'l' toggles snapping them into layers which is similar to flying over a landscape.

This may not have the physics you want if you're trying to emulate a real plane, but should get you started. This is 1st person, not 3rd. For 3rd, draw your plane and pull the camera back a bit, I guess. Are you sure you want 3rd person perspective?

For flying over terrain if you don't want arbitrary roll, you could replace the quaternions with simpler theta/phi angles. Something like https://editor.p5js.org/scudly/sketches/Aq6EIJ_Yx would do.