
Incorrect variable description for vertex() methods #506

Closed
hx2A opened this issue Sep 9, 2024 · 6 comments · Fixed by #532

Comments

hx2A (Collaborator) commented Sep 9, 2024

The vertex() method documentation contains this signature:

vertex(
    v: npt.NDArray[np.floating],  # vertical coordinate data for the texture mapping
    /,
) -> None

This signature refers to an undocumented Processing feature that allows you to pass 37 floats to this method to set the vertex's position, stroke, fill, stroke weight, etc. The variable name happens to be v, which is the same v typically used for uv texture mapping, and that collision explains the incorrect parameter description above.
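To make that concrete, calling this signature means handing vertex() a single row of all 37 per-vertex values at once, something like the following (illustrative only; beyond X, Y, Z leading the row, the layout is Processing-internal):

import numpy as np

import py5


def setup():
    py5.size(300, 300)
    py5.stroke_weight(10)
    row = np.zeros(37)           # one vertex as a 37-float row
    row[:3] = 150.0, 150.0, 0.0  # X, Y, Z come first; everything else stays 0 here
    with py5.begin_shape(py5.POINTS):
        py5.vertex(row)          # the single-ndarray signature quoted above


py5.run_sketch()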

To address this, the generator code needs to intercept the Java info and override this part of the data. Setting the parameter name to something unique will let me write the correct description and include it in the documentation.

Py5Graphics.vertex() also has this same problem.

Are there other methods that have incorrect variable descriptions?

villares (Collaborator) commented

Is it risky to implement a py5 interface to this undocumented Processing signature?

A "vectorized" vertices() where you could pass optional keyword arguments strokes=, fills=, and stroke_weights= as arrays/sequences sounds smashingly useful.

Maybe the selective/optional part is harder (you would have to figure out the current attributes to pass along with the user-defined ones). But even so... to be able to use vertices() like this would be very nice.

hx2A (Collaborator, Author) commented Sep 13, 2024

> Is it risky to implement a py5 interface to this undocumented Processing signature?
>
> A "vectorized" vertices() where you could pass optional keyword arguments strokes=, fills=, and stroke_weights= as arrays/sequences sounds smashingly useful.

Hmmm, this is an intriguing idea. Perhaps you could write something like:

py5.styled_vertex(10, 20, 30, stroke="red", fill="white")

And then it would construct the 37-float array for you and pass that. Would that be better though? I suspect the effort of constructing the array would negate the performance gain of using this feature, which is really the only upside of it existing.
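Roughly, such a helper might build the row like this (just a sketch; the slot positions follow Processing's internal vertex fields, with X/Y/Z first, fill RGBA at 3-6, stroke RGBA at 13-16, and stroke weight at 17, and it assumes py5.color() understands whatever color values get passed in):

import numpy as np

import py5


def styled_vertex(x, y, z=0.0, *, fill=None, stroke=None, stroke_weight=None):
    # hypothetical helper: fill in the 37-float row and hand it to vertex();
    # unused slots are simply left at zero here
    row = np.zeros(37)
    row[:3] = x, y, z
    if fill is not None:
        c = py5.color(fill)  # assumes py5.color() accepts the given color value
        row[3:7] = [py5.red(c), py5.green(c), py5.blue(c), py5.alpha(c)]
        row[3:7] /= 255  # the internal table stores normalized 0-1 color values
    if stroke is not None:
        c = py5.color(stroke)
        row[13:17] = [py5.red(c), py5.green(c), py5.blue(c), py5.alpha(c)]
        row[13:17] /= 255
    if stroke_weight is not None:
        row[17] = stroke_weight
    py5.vertex(row)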

> to be able to use vertices() like this would be very nice.

An idea I'd like to try is to use a pandas DataFrame, or at least show people how to set up the DataFrame correctly, with the column names in the right order, such that the DataFrame could be passed to vertices(). This would be efficient and is the best way I can think of to make this feature useful.

villares (Collaborator) commented Sep 13, 2024

I was initially thinking about contexts where you would be creating a collection of things to display anyway.

Use cases that would be cool to optimize:

  • Lots of points of different colors and sizes
  • Triangular boids of different colors

with py5.begin_shape(py5.POINTS):
    py5.styled_vertices(coords, strokes=colors, stroke_weights=sizes)

with py5.begin_shape(py5.TRIANGLES):
    py5.styled_vertices(coords, fills=colors, no_stroke=True)

Having to build a Py5Shape object and then set_strokes() and set_fills() feels a bit awkward... and there is no set_stroke_weights()...
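For comparison, the current route looks roughly like this (a rough sketch of my understanding; it assumes the vectorized set_strokes() accepts a plain sequence of color values):

import numpy as np

import py5


def setup():
    py5.size(500, 500, py5.P2D)
    n = 100
    coords = np.random.uniform(20, 480, (n, 2))
    # per-vertex stroke colors built one by one with py5.color()
    strokes = [py5.color(py5.random(255), py5.random(255), py5.random(255)) for _ in range(n)]

    s = py5.create_shape()
    s.begin_shape(py5.POINTS)
    s.stroke(0)
    s.stroke_weight(15)  # one weight for the whole shape; no per-vertex set_stroke_weights()
    s.vertices(coords)
    s.end_shape()
    s.set_strokes(strokes)  # vectorized per-vertex stroke colors

    py5.background(255)
    py5.shape(s)


py5.run_sketch()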

Otherwise, maybe we could go closer to the idea from the GeoDataFrame conversation we had... columns for attributes and one for geometry.

Crazy idea: could we maybe have a Py5ShapeDataFrame inspired by GeoDataFrames?

hx2A (Collaborator, Author) commented Sep 13, 2024

> Having to build a Py5Shape object and then set_strokes() and set_fills() feels a bit awkward... and there is no set_stroke_weights()...

Right, it is an awkward method. It was created because it needed to exist for the trimesh integrations to work efficiently. That's the best use for it. I wouldn't use it any other time.

I do like the GeoDataFrame idea quite a bit. And perhaps a Py5ShapeDataFrame object could be useful also? It could work kind of like Py5Shape objects but provide tabular access to the shape data. Internally it would use the vertex() method with 37 columns to create the shape.

hx2A (Collaborator, Author) commented Sep 20, 2024

@villares something I was working on required me to attempt to use the vertex() method with a 37-column table. Unfortunately it didn't work. TL;DR: this seems to be part of functionality that probably worked in Processing a long time ago but is now useless. I will remove that method signature since it is misleading.

If anyone is curious, here's the code I used to test this:

import numpy as np
import pandas as pd

import py5

COLUMN_NAMES = "X Y Z FILL_R FILL_G FILL_B FILL_A TEXTURE_U TEXTURE_V NORMAL_X NORMAL_Y NORMAL_Z EDGE STROKE_R STROKE_G STROKE_B STROKE_A STROKE_WEIGHT TRANS_X TRANS_Y TRANS_Z VIEWSPACE_X VIEWSPACE_Y VIEWSPACE_Z VIEWSPACE_W AMBIENT_R AMBIENT_G AMBIENT_B SPECULAR_R SPECULAR_G SPECULAR_B SHINE EMISSIVE_R EMISSIVE_G EMISSIVE_B BEEN_LIT HAS_NORMAL".split()


def make_data():
    data = pd.DataFrame(np.zeros((5, len(COLUMN_NAMES))), columns=COLUMN_NAMES)

    data["X"] = np.random.uniform(0, py5.width, size=data.shape[0])
    data["Y"] = np.random.uniform(0, py5.height, size=data.shape[0])
    data["Z"] = 0

    # everything below this line is ignored by JAVA2D renderer
    data["STROKE_R"] = np.random.rand(data.shape[0])
    data["STROKE_G"] = np.random.rand(data.shape[0])
    data["STROKE_B"] = np.random.rand(data.shape[0])
    data["STROKE_A"] = 1.0

    data["FILL_R"] = np.random.rand(data.shape[0])
    data["FILL_G"] = np.random.rand(data.shape[0])
    data["FILL_B"] = np.random.rand(data.shape[0])
    data["FILL_A"] = 1.0

    data["STROKE_WEIGHT"] = np.random.uniform(1, 50, size=data.shape[0])

    data["EDGE"] = 1.0

    data["TRANS_X"] = data["X"] - py5.width / 2
    data["TRANS_Y"] = data["Y"] - py5.height / 2
    data["TRANS_Z"] = -866.025

    data["VIEWSPACE_X"] = data["X"]
    data["VIEWSPACE_Y"] = data["Y"]
    data["VIEWSPACE_Z"] = -0.818
    data["VIEWSPACE_W"] = 866.025

    py5.println(data.T)

    return data


def setup():
    py5.size(500, 500, py5.JAVA2D)  # get unstyled points
    # py5.size(500, 500, py5.P3D)  # nothing drawn to screen

    py5.stroke_weight(20)  # leave this

    data = make_data()

    print(data.shape)
    py5.background(255)

    with py5.begin_shape(py5.POINTS):
        py5.vertices(data.values)
        # above line is the same as:
        # for i in range(data.shape[0]):
        #     py5.vertex(data.iloc[i].values)


py5.run_sketch()

You can see the 37 columns I was talking about. With the JAVA2D renderer it only looks at the X and Y values. For the OpenGL renderers, it puts the data in a vertices array that is in the abstract PGraphics class but doesn't actually get used by the P2D or P3D renderers. Basically, the vertex data gets stored someplace that is ignored.

Since this is in the abstract PGraphics class it must have been used by the other renderers at some point, but the code in the subclasses evolved so that it is no longer used.

There is one renderer this would work for, though: the py5shapely renderer that I have not yet open-sourced. When building that renderer I came across this 37-column data table and used it in the py5shapely implementation. That's how I knew about it. I thought the other renderers supported it too. Oh well! It was an idea.

Anyhow, since this is misleading, I'll remove it from the documentation; nobody should use it and it isn't actually supported by Processing.

hx2A mentioned this issue Oct 28, 2024
hx2A closed this as completed in #532 Oct 28, 2024
hx2A (Collaborator, Author) commented Oct 28, 2024

@villares, with this fix, I've closed the issues I'd like to include in the next release. My plan is to leave #496 for later. Perhaps I'll do one more release before the end of the year that includes #496 and whatever Live Coding bugs emerge once more people start using it.
