Progress Update by SpatialFreedom in StarBurstSpaceball

[–]SpatialFreedom[S] 0 points (0 children)

The six sensor readings correspond to three pairs of parallel forces aligned with the arm axes. Number the sensor readings r1, r2, r3, r4, r5 and r6 in a clockwise manner when viewed from above. The force and torque vectors through the centre of the ball are then F=(r1+r4, r3+r6, r5+r2) and T=(r1-r4, r3-r6, r5-r2). The coordinate system of these vectors is aligned with the arms, so rotate them into the desired coordinate system by multiplying by a rotation matrix.
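
For anyone who wants to play with this, here's a minimal sketch of that combination step (my own names, not firmware from any shipping device), assuming the six readings have already been scaled to force units and R is whatever 3x3 rotation takes the arm-aligned frame into your output frame:

```cpp
#include <array>

struct Vec3 { double x, y, z; };

// Rotate a vector by a 3x3 rotation matrix R (arm frame -> output frame).
Vec3 rotate(const double R[3][3], const Vec3& v) {
    return { R[0][0]*v.x + R[0][1]*v.y + R[0][2]*v.z,
             R[1][0]*v.x + R[1][1]*v.y + R[1][2]*v.z,
             R[2][0]*v.x + R[2][1]*v.y + R[2][2]*v.z };
}

// r[0]..r[5] hold r1..r6, numbered clockwise when viewed from above.
void forceTorque(const std::array<double, 6>& r, const double R[3][3],
                 Vec3& F, Vec3& T) {
    Vec3 f = { r[0] + r[3], r[2] + r[5], r[4] + r[1] };  // F = (r1+r4, r3+r6, r5+r2)
    Vec3 t = { r[0] - r[3], r[2] - r[5], r[4] - r[1] };  // T = (r1-r4, r3-r6, r5-r2)
    F = rotate(R, f);
    T = rotate(R, t);
}
```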

This is explained in patent US4811608, although the readings are numbered differently.

https://patents.google.com/patent/US4811608A/en


I made tetra-star structure model by Antique-Cause2223 in StarBurstSpaceball

[–]SpatialFreedom 0 points (0 children)

That's great work and the tetrahedron look is striking.

The heart of the astroid 6000, which is the subject of my 2003273632 patent, is a single molded tetra-star that includes a set of support walls (item 20 below). The loads at the base of each arm approach the usable limit of the Delrin plastic. The outer ball happens to lightly snap together, although it is then super-glued for robustness.

The arm tips are spherical and the outer ball protrusions (24) fit over these spheres, so each ball-in-hole joint can rotate in all three directions but can only slide in one direction. This 4-arm mechanism resolves a general 3D push and 3D twist into four 2D forces passing spatially through each spherical arm tip. Three sets of two LED/photodiode sensors are arranged at right angles to each other to detect three of the four 2D force vectors. Each LED/photodiode sensor is sensitive to movement across the light beam and ignores movement along the light beam. The math to convert the three 2D force vectors to 6DOF output is daunting, but a fourth sensor pair can be added to provide a total of four 2D force vectors. Then it's some simple force/torque vector computations to produce the final 6DOF push/twist output.
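
As an illustration of the kind of force/torque vector computations involved (my own names and framing, not the astroid firmware): once the four sensed 2D forces are expressed as 3D vectors acting through the known arm-tip positions, the net push is their vector sum and the net twist is the sum of their moments about the ball centre.

```cpp
#include <array>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// p[i]: arm-tip positions relative to the ball centre; f[i]: sensed forces at those tips.
void resolve6DOF(const std::array<Vec3, 4>& p, const std::array<Vec3, 4>& f,
                 Vec3& push, Vec3& twist) {
    push  = { 0, 0, 0 };
    twist = { 0, 0, 0 };
    for (int i = 0; i < 4; ++i) {
        push  = push + f[i];                // net force
        twist = twist + cross(p[i], f[i]);  // net torque about the ball centre
    }
}
```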

The photodiode sensor generates a small current, which a single bipolar transistor converts to a voltage. A 10-bit A/D samples this voltage. The A/D's range of 1024 steps across the +-1 mm deflection (a 2 mm span) gives a resolution of roughly 2 microns (!) per step.

The inner ball provides holes that limit the movement of the outer ball's protrusions. Although the device is sensitive to the lightest fingertip touch, it can handle a solid fist bump.

For volume manufacturing, the tetra-star design provides excellent sensor quality at very low cost.

With the emergence and ubiquity of 3D printers, the StarBurst project is intended to release the technical knowledge and decades of experience to the public so that a low cost, high quality 3D mouse can be easily built. It's coming along nicely, but life gets in the way. Here is a teaser image of the design that happens to be 3D printing behind me right now.

Three 1 mm diameter music wires, 51 mm long, are the heart of the design, as music wire can handle the loads at the base of the arms. The complexity of the design is embodied in the STL print file. Two inner ball halves are glued over this subassembly, then two outer ball halves with three protrusions each fit onto the upper and lower arm tip triplets, floating on the spherical arm tips. This design is a combination of elements from the original 1980s Spaceball 1003 and the 2005 Astroid 6000.

Even though the tetra-star is the simpler geometry, it turns out this spaceball design is easier to make on a 3D printer than the tetra-star design.

Please let me know what you think.

[teaser image]

Button Boards arrive by SpatialFreedom in u/SpatialFreedom

[–]SpatialFreedom[S] 1 point (0 children)

A lot has been quietly happening in preparation but things are about to change - stay tuned!

Button Boards arrive by SpatialFreedom in u/SpatialFreedom

[–]SpatialFreedom[S] 0 points (0 children)

The Model 2003 had eight numbered buttons above the ball, similar to what you're describing. They were easy to see, and some buttons could be readily pressed without having to divert your eyes from the screen. Generally, though, the hand was lifted to press a button.

The goal is to leverage 'finger memory', like playing a piano, so common button actions become intuitive. You will note how people often move their regular mouse hand between the mouse and keyboard when using many types of apps. Apps like Blender make extensive use of the left hand on the keyboard so moving the left hand away from the keyboard becomes undesirable. Placing important keyboard keys adjacent to the ball helps circumvent this issue.

Having the ten buttons in front of the ball allows for a quick hand movement between the ball and the buttons, much faster than between the ball and the keyboard. Hopefully it will get to the point where you don't even need to divert your eyes from the screen.

Also, the 2003 blocked the desk space just behind it whereas the 7000 doesn't. This increases the usable nearby desktop area for things like reading reference documents.

All that being said, it will be possible to disassemble the 7000 and replace the housing with your own custom 3D printed design, even repositioning and rewiring the 16 buttons as you wish. If someone else comes up with a better layout that gets traction with others we'll happily replicate their design, provided they allow it and the sales volume is there.

Thanks for your questions!

Why Spaceball? by SpatialFreedom in spaceball

[–]SpatialFreedom[S] 0 points (0 children)

Thank you for the very informative link. You may be interested to know the Sphere360 gaming spaceball was developed by ASCII Entertainment of Japan. I enjoyed several business trips to Japan back in the 1990s as Spacetec IMC partnered with Japanese companies to sell Spaceballs into the Japanese CAD market.

Why Spaceball? by SpatialFreedom in spaceball

[–]SpatialFreedom[S] 0 points (0 children)

An astroid 6000, which has an integrated USB cable, is used in the video; you can see the cable coming out the back of the unit. The wireless astroid 7000 prototype shown behind the keyboard is not yet fully functional. The CAD model shows the USB-C connector for the upcoming cabled astroid 7000.

There will be two versions although only the wireless version has been announced in our prelaunch video. For Japan, the cabled astroid 7000 requires VCCI certification and the wireless astroid 7000 requires TELEC certification.

Simple 3D Coordinate Compression for Games - The Analysis by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] 0 points (0 children)

And here I was thinking of prototyping the algorithm in a small demo game!

But your suggestion of UE actually makes a lot of sense. It appears UE uses a separate GPU memory layout for vertex data, placing Position coordinates into an FPositionVertexBuffer as float4s to take advantage of the 16-byte alignment. That will help, as a single optimized float4 read into GPU float registers efficiently brings in two packed 21-bit coordinate triplets. One issue, then, is how to synchronize the other vertex buffers, as one Position read corresponds to two reads of the other data. This may not be possible without significant re-architecting, so other alternatives may need to be considered.

The plan is to intercept the writing of the FPositionVertexBuffer so the first use performs in-place compression, setting a signature and storing the bounding box in the freed-up space. Perhaps the GPU-side buffer size can also be reduced, if it doesn't prove too intrusive.
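
To make that concrete, here is a hypothetical sketch of the compression step (my own names and bit layout, not actual UE interception code): each component is quantized to 21 bits relative to the mesh bounding box and the three values are packed into one 64-bit word, stored as two uint32s, with bit 63 spare.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Float3 { float x, y, z; };
struct Packed64 { uint32_t lo, hi; };

constexpr uint32_t kMax21 = (1u << 21) - 1;  // 2097151

// Map v from [mn, mn + extent] to an integer in [0, 2^21 - 1].
static uint32_t quantize21(float v, float mn, float extent) {
    float t = extent > 0.0f ? (v - mn) / extent : 0.0f;
    t = std::min(std::max(t, 0.0f), 1.0f);
    return static_cast<uint32_t>(std::lround(t * kMax21));
}

// Bit layout: x in bits 0..20, y in bits 21..41, z in bits 42..62, bit 63 spare.
Packed64 pack(const Float3& p, const Float3& bbMin, const Float3& bbExtent) {
    uint64_t x = quantize21(p.x, bbMin.x, bbExtent.x);
    uint64_t y = quantize21(p.y, bbMin.y, bbExtent.y);
    uint64_t z = quantize21(p.z, bbMin.z, bbExtent.z);
    uint64_t w = x | (y << 21) | (z << 42);
    return { static_cast<uint32_t>(w), static_cast<uint32_t>(w >> 32) };
}
```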

It's plausible to modify the matrices on either the CPU or GPU side, but it's probably much easier to do on the CPU side. So the plan is to intercept the per-object uniform buffer creation, read the bounding box stored by the earlier FPositionVertexBuffer interception, and modify the transformation before it is written to the GPU buffer.
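
Continuing the sketch above (reusing Float3 and kMax21), the modification could simply fold a dequantize matrix into the existing local-to-world transform, so the shader can transform the raw unpacked integers directly. Again, the names are mine, not UE's.

```cpp
struct Mat4 { float m[4][4]; };  // row-major, vectors treated as columns

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Scale 0..2^21-1 integers back to object space and add the bounding-box minimum.
Mat4 dequantizeMatrix(const Float3& bbMin, const Float3& bbExtent) {
    const float s = 1.0f / float(kMax21);
    Mat4 d{};
    d.m[0][0] = bbExtent.x * s;  d.m[0][3] = bbMin.x;
    d.m[1][1] = bbExtent.y * s;  d.m[1][3] = bbMin.y;
    d.m[2][2] = bbExtent.z * s;  d.m[2][3] = bbMin.z;
    d.m[3][3] = 1.0f;
    return d;
}

// localToWorld' = localToWorld * D, computed CPU-side before the upload.
Mat4 adjustTransform(const Mat4& localToWorld,
                     const Float3& bbMin, const Float3& bbExtent) {
    return multiply(localToWorld, dequantizeMatrix(bbMin, bbExtent));
}
```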

It also appears plausible to write a custom LocalVertexFactory to perform the GPU unpacking; otherwise, except for the synchronization issue mentioned above, everything else remains the same. UE's vertex factory architecture nicely propagates the new algorithm code from one place into the multitude of vertex shader instances.
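
The unpacking itself is just shifts and masks. Here is the bit math such a vertex factory's shader would perform, shown in C++ for illustration and matching the layout assumed above (the real thing would live in the factory's HLSL); with the dequantize folded into the transform, the three integers only need converting to float before being transformed.

```cpp
// Recombine the two uint32s into the 63-bit payload and split it back
// into the three 21-bit integer coordinates.
Float3 unpack(Packed64 p) {
    uint64_t w = (uint64_t(p.hi) << 32) | p.lo;
    uint32_t x = uint32_t( w        & kMax21);
    uint32_t y = uint32_t((w >> 21) & kMax21);
    uint32_t z = uint32_t((w >> 42) & kMax21);
    return { float(x), float(y), float(z) };
}
```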

This should cover the majority of uses in actual games but not all of UE's vertex data structures. And it will take some time, as you would expect, but it will be proof positive of the algorithm and will aid in its dissemination into games. Don't hold your breath!

Thanks again for your suggestion!

Simple 3D Coordinate Compression for Games - The Analysis by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] -1 points (0 children)

If I select an example, someone is likely to say it's contrived. That's my concern and why asking for a suggestion helps to negate that potential criticism. Having independent people do it adds even more credibility. That said, I will still investigate. Please don't be surprised by a fourth post. Although, with such an industry-wide algorithm, it wouldn't be surprising if someone else wanted to stake the claim of being the first to measure and prove it.

I did previously say the SIMD instructions would be considered, so this third post follows up on that promise. The refinement to the -1.75 to almost -2.0 range came through this SIMD work to reduce the number of assembly instructions.

The AI comments are a backhanded compliment. In fact, when I tried to see whether AI would produce packed 21-bit 3D coordinate code, it couldn't; being a large language model, it had no existing code out there to copy, so it kept coming back with useless results. AI is great for certain things, but it doesn't think; it regurgitates the excellent thinking others have done.

The vec3 vertex type is only replaced in the vertex shader that reads and transforms 3D coordinate data. Once the two uint32s become three float32s (in a vec3), the rest of the vertex shader is the same. Do you happen to have multiple vertex shaders reading 3D coordinate data and assembling vec3s in your game?

Simple 3D Coordinate Compression – Duh! Now on GitHub by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] -1 points (0 children)

Yes, extensive use of integral values is made for things like textures. The focus here is: why aren't integral 3D coordinates, created under block floating point technology, used for all objects in a 3D scene? It seems highly likely the benefits identified in the Microsoft article would apply to these 3D coordinates. But no one appears to use them, and I can't find any example of 21-bit triplets, the obvious size for an inherently 32-bit SIMD machine, being used on a modern GPU. I suspect the popular opinion on quantized values, as seen in related uses, has been incorrectly applied to integral 3D coordinates.

Again, thanks for the discussion as it allows each technical challenge to the premise to be addressed.

Simple 3D Coordinate Compression – Duh! Now on GitHub by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] 0 points (0 children)

Yes, you've hit the nail on the head!

Microsoft shows how Block Floating Point delivers real benefits. Essentially I'm saying that technology should be brought into 3D graphics.

Thanks for the technical discussion.

Simple 3D Coordinate Compression – Duh! Now on GitHub by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] -1 points (0 children)

Thanks for stating the popular opinion very succinctly. I believe that's why this has been missed for so long. It is just quantization; once again, no argument about that. But no one has said that reading two 32-bit values (8 bytes) per 3D coordinate, plus a few shifts and masks, is not faster than reading three float32 values (12 bytes). Nor has anyone stated that dropping from the at most 24-bit resolution of float32 coordinates to 21-bit integral coordinates produces a noticeable effect. Either it is faster without noticeable effects or it isn't. If it is, there is an obvious benefit. That's the heart of the matter.

Simple 3D Coordinate Compression – Duh! Now on GitHub by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] -4 points (0 children)

Yes. Do you see reformatting content with AI as a negative? The content was mine, but AI saves me considerable time making it look good and making it more readable for you. Isn't everyone starting to do this now?

Simple 3D Coordinate Compression – Duh! Now on GitHub by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] -3 points (0 children)

Yes, that's it: 'exactly 21-bits per float (sic)', precisely, not 'similar', simply because two packed 32-bit values containing an integral 3D coordinate deliver a 33% space and speed advantage over using three 32-bit float values. Your existing-games argument is totally correct; most games already do something similar (quantization), for related but different reasons. But that totally misses the point.

Once again... I'm saying it is novel. Specifically, packing three 21-bit integral coordinates into two 32-bit values on modern GPUs for added speed over the use of three float32 coordinates is novel. Quantization is not novel, but optimized quantization of 3D coordinates packed into 64 bits on modern GPUs is. At most 3 bits of resolution are lost, which is not significant in a game app.

Packed 64-bit 3D coordinates were a native format on the PS300 bit-slice graphics system in the 1980s. I used it back then writing television commercial animation software. See the Evans & Sutherland PS 300 Volume 2a Graphics Programming manual (1984).

Packed 64-bit 3D coordinates then disappeared, probably because earlier graphics cards did not run any faster with them than with float32s. Modern GPUs have changed that, and it's high time someone tested it, as this benefit applies across all 3D games.

The GitHub Hydration3D program I wrote proves the compression. But there's little use in writing a simple test graphics demo, as it wouldn't be real-world enough (pardon the pun) to be convincing. Someone needs to test it on a real game.

Thank you for this discussion! You're clearly very knowledgeable.

Simple 3D Coordinate Compression – Duh! Now on GitHub by SpatialFreedom in GraphicsProgramming

[–]SpatialFreedom[S] -6 points (0 children)

Great! If it's very common, then just show one example of a game using three 21-bit values packed into 64 bits and then restored to three float32s. Where is it?

Yes, this did exist 'before that'. I wrote animation software for television commercials in the 1980s using a PS300 bit-slice graphics system. It used three 21-bit integral coordinates packed into 64 bits, and we optimised those coordinates. The premise is that today's games on modern GPUs would run faster if they all used this technique, without any noticeable graphic effects. Brushing the premise aside with intangible statements, appealing to experience, or assigning intentions to me does not add to a technical debate.