
SharpEars

Registered Member
  • Posts: 196
  • Joined
  • Last visited
  • Days Won: 5

Everything posted by SharpEars

  1. I think I understand what you are saying, but allowing for such a "special case of non-planar polygons" violates the concept of planarity and is very confusing to the user. If this is in fact why such a quad is marked as non-planar by Cinema 4D's mesh checking, even though by all sensible (and mathematical) definitions the quad is planar (floating-point error aside, which I don't believe is present in this case; a similar case could surely be constructed with all-integral coordinates to rule out FP error, at least from a point-coordinate perspective), then Cinema 4D's mesh checking needs to be modified to take this "special" case into account and not mark it as non-planar.

The current behavior is very misleading from the user's perspective. Mesh checking is used quite frequently to ensure that one's point layout and alignment is not causing quads to be non-planar (i.e., "rounded" due to the non-planarity of the triangles they are subdivided into). Allowing for an edge case such as this only leads to confusion, unless there is an extremely good reason why such quads should be marked as "non-planar." I purposely put that term in quotes, since it is being used in a very "off-label" (i.e., non-conventional and non-mathematical) way, as an indicator to the user of the presence of this edge case.
  2. Version info: Cinema 4D 2023.2.1

There appears to be some sort of bug in the polygon planarity detection of the Mesh Checking functionality. An example: the quad in question as part of a polygonal object, with the Coordinate Manager showing zero depth along the X dimension, and the quad displayed in an excerpt of the Structure Manager set to Polygons Mode. BTW, there are no N-gons present in this object - all triangles and quads. The actual point positions of the quad's points are shown via the Structure Manager in Points Mode.

Tests already performed:
- There are no overlapping/coincident points
- Optimize to a reasonable distance has been performed (no change)

I even loosened the Mesh Checking setting for Non-Planar Polygons detection to 5 degrees to see if that makes any difference (as did every other large angle I attempted, including 0, 45, 90, and 135 degrees [why are angles >90 even permitted?]). It does not; the quad is still detected as non-planar.

Here is the relevant subset of a Python script that shows the coords of the polygon's points with more precision:

```python
if __name__ == '__main__':
    ob = doc.GetActiveObject()
    for point_idx in (3, 26, 27, 43):
        point_coords = ob.GetPoint(point_idx)
        print(f'point {point_idx:2n} coords with more precision: '
              f'{{{point_coords.x:.15f}'
              f', {point_coords.y:.15f}'
              f', {point_coords.z:.15f}}}')
```

Output:

point  3 coords with more precision: {0.000000000000000, 1.333333253860474, -0.666666626930237}
point 26 coords with more precision: {0.000000000000000, 1.666666597127914, -0.333333283662796}
point 27 coords with more precision: {0.000000000000000, 1.375962178533936, -0.322070503298034}
point 43 coords with more precision: {0.000000000000000, 1.354647725510147, -0.494368489831828}

This quad is co-planar with - and, for this specific case, literally lies on - the Y-Z plane. It has no X depth, and yet it is deemed non-planar by the Mesh Checking functionality.
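To double-check the geometry independently of Cinema 4D, one can test the four reported points with a scalar triple product - a minimal, pure-Python sketch (the helper names are mine, not part of the C4D API):

```python
# Four points are coplanar iff the scalar triple product
# (b - a) . ((c - a) x (d - a)) is (near) zero.
def sub(p, q): return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
def dot(u, v): return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def is_planar(a, b, c, d, tol=1e-9):
    return abs(dot(sub(b, a), cross(sub(c, a), sub(d, a)))) <= tol

# The four points printed by the script above (all lie on the Y-Z plane):
quad = [(0.0, 1.333333253860474, -0.666666626930237),
        (0.0, 1.666666597127914, -0.333333283662796),
        (0.0, 1.375962178533936, -0.322070503298034),
        (0.0, 1.354647725510147, -0.494368489831828)]
print(is_planar(*quad))  # True
```

Running this on the exact coordinates above confirms the quad is mathematically planar, so whatever the Mesh Checker is flagging, it is not point geometry.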
  3. This is easily testable via Python, thanks for the tip and I'll do the test and see what shakes out.
  4. Steps to easily reproduce:

- Create a Platonic object with the properties shown.
- Create a Subdivision Surface generator as its parent, as the hierarchy in the following image depicts, with the generator's properties set as shown (n.b., the Type property is changed to OpenSubdiv Loop to create more geometry while maintaining a reasonable level of subdivision).
- Select the Subdivision Surface object and collapse its entire object hierarchy into a single editable polygonal object (C keyboard shortcut), which I have renamed Editable Subdivided Platonic in the image showing the results. The resulting object is shown from the front (i.e., via a Front View), along with its point stats as shown in Point Mode.
- Switch back to Object Mode and bring up the Structure Manager as well as the Axis Center dialog box.
- Compare the placement of the Object Axis when Points Center is checked vs. unchecked in the Axis Center dialog, as well as the ensuing changes to the object's (local) point coordinates in the Structure Manager.

The remaining settings of the Axis Center dialog should be left intact for the aforementioned test (except for Auto Update - your choice, depending on whether you prefer to update manually - and, of course, the Points Center checkbox, which must be toggled in order to reproduce the bug being reported).

Please let me know if this is a bug that can be readily fixed for the next release of Cinema 4D.
  5. Version info: Cinema 4D 2023.2.1

The Axis Center dialog can be used to center the Object Axis of a polygonal object (we'll assume no anomalous points, edges, or polygons are present in said object). In Model Mode, with the object selected, we bring up the Axis Center dialog and ensure that Action is set to Axis To and Center is set to All Points. Include Children, Use All Objects, and Viewport Update (not relevant) are unchecked for the sake of discussion. Auto Update is checked, for convenience, so we can see the effect of our choice immediately, without having to click the Execute button at the bottom of the dialog. We will compare the following two sets of dialog settings:

Scenario #1 - Use the Points Center override to fix all of the X/Y/Z relative-positioning percentage values to zero. The Points Center checkbox is checked, disabling any changes to the X/Y/Z percentages and making said percentages inapplicable.

Scenario #2 - Use X/Y/Z percentage values to define relative positioning among the points. The Points Center checkbox is not checked, and the X/Y/Z percentages are all set to 0%.

One would expect these two configurations to produce identical outcomes, but this appears not to be the case, and that, in my opinion, is a bug. There is a slight difference in the placement of the Object Axis, with the placement for Scenario #1 (i.e., using the Points Center checkbox) being the one in error. For my test, I used a fairly complex polygonal solid with 892 points, 1897 edges, and 969 polygons. It should be possible to reproduce this behavior with a far simpler shape, but I will enclose an excerpt image showing the Object Axis placement for the two scenarios from the perspective of the Front View to show that they do in fact differ.

In any case, I will show additional screenshots: first, the position that results from the settings of Scenario #1 (Points Center); next, the position that results from Scenario #2 (explicit X/Y/Z offsets of 0%); and, to make the difference more visible, an overlaid version of both scenarios. If you look at any of the three axis arrows (X, Y, or Z) in the overlaid version, you will see a small offset along the X axis, which appears as a blur, and a still smaller offset along Y. Here are the values of one of the first points of the object for both scenarios, so that the numeric difference can be presented empirically:

Scenario #1 - incorrect axis placement - Point 0 coords post axis-centering operation: -4.777, -35.7576, -0.0137
Scenario #2 - correct axis placement - Point 0 coords post axis-centering operation: -4.75, -35.75, 0

Characterization of Differences

The following are the (absolute-value) percentage errors in the X, Y, and Z directions. Each represents the error in offset along one of the three Euclidean axes, as a percentage of the object's corresponding size (i.e., width, height, and depth, respectively) along that direction: X=0.257%, Y=0.007%, Z=0.130%.

In my opinion, the difference is too large to be attributed to mere floating-point error, especially within the context of the scale used for the project:

Project Scale: 1 cm
Effective Scale: 1.000 x (No scaling)
Object size (X x Y x Z): 10.5 cm x 108.5 cm x 10.5 cm
Grid spacing shown in the images above: 0.5 cm
Both Display and Project units are: cm
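For what it's worth, one way the two modes could legitimately diverge is if one of them computes the mean of all point positions while the other computes the bounding-box midpoint; these coincide only for symmetrically distributed points. This is pure speculation on my part - a sketch in plain Python (the function names are mine; this is not the C4D implementation):

```python
# Hypothetical comparison of two "center" definitions an axis-centering tool
# might use; neither function is part of the Cinema 4D API.
def centroid(points):
    """Mean of all point positions (one plausible reading of 'Points Center')."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def bbox_center(points):
    """Bounding-box midpoint (what 0%/0%/0% offsets would plausibly select)."""
    return tuple((min(p[i] for p in points) + max(p[i] for p in points)) / 2
                 for i in range(3))

# An asymmetric point distribution makes the two centers differ:
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
print(centroid(pts))     # (3.25, 0.0, 0.0)
print(bbox_center(pts))  # (5.0, 0.0, 0.0)
```

If the observed offsets track point density rather than scale, that would point toward a definitional difference rather than floating-point error.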
  6. There's a free preview of the first 100 pages of the book over at google books: https://books.google.com/books/about/Maxon_Cinema_4D_2023_Modeling_Essentials.html?id=MOuxEAAAQBAJ&printsec=frontcover&source=kp_read_button&hl=en&newbks=1&newbks_redir=0&gboemv=1&ovdme=1&ov2=1#v=onepage&q&f=false ..., so you can make at least some sort of assessment of its quality/content.
  7. Here's a thought: layer enhancements allowing for the assignment of multiple layers to an object, with overrides, and perhaps some notion of property inheritance between ancestor/descendant layers.
  8. Here is another image that shows how Cinema 4D calculates the normal for a non-planar quad, with a detailed description of the math and elements shown appearing right below it.

The violet and magenta line segments are simple Segment Guides that I added to help represent the diagonals of this non-planar quad. In terms of calculations, the ▲abc normal is averaged with the ▲cda normal to arrive at the final normal being sought - one that matches Cinema 4D's calculation for the quad "polygon" - represented in the image by the ▰ abcd normal. If you look closely, you can even see the yellow Cinema 4D polygon normal protruding from the arrow tip of this blue (calculated) average normal. I hope the picture explains it well.

Each of the two triangle normals' directions was arrived at by calculating the normalized cross products of two edge vectors from the quad, with:
- ab x bc used for the ▲abc normal, and
- cd x da used for the ▲cda normal

With the directions of the normals calculated, the yellow illustrative arrows representing them in the image were positioned so that each originates from the centroid (aka barycenter) of its respective triangle, as one is normally accustomed to seeing them. The component triangle normals were then used in the following calculations:
1. Find the average of the two triangle normals using the formula: (▲abc normal + ▲cda normal) / 2
2. Normalize the resulting vector to a unit vector
3. Show the final outcome as an arrow labeled ▰ abcd normal

The final normal's arrow is positioned so that it starts at the center of the quad, defined as the average of its four points' coordinates (the points are labeled a, b, c, and d in the image). This is identical to how Cinema 4D calculates the quad's center, as you can see from the polygon normal's start point in the image (which also happens to be the World origin, as well as the quad's axis as I positioned it).
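The calculation described above can be sketched in a few lines of pure Python (the vector helpers are mine, and the point coordinates are made up for illustration):

```python
import math

def sub(p, q):   return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
def normalize(v):
    n = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/n, v[1]/n, v[2]/n)

def quad_normal(a, b, c, d):
    """Average the two triangle normals (ab x bc and cd x da), then re-normalize."""
    n1 = normalize(cross(sub(b, a), sub(c, b)))  # the ▲abc normal
    n2 = normalize(cross(sub(d, c), sub(a, d)))  # the ▲cda normal
    return normalize(tuple((n1[i] + n2[i]) / 2 for i in range(3)))

# A deliberately non-planar quad: point d is lifted out of the a-b-c plane.
a, b, c, d = (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)
print(quad_normal(a, b, c, d))
```

Note the re-normalization in the last step: the raw average of two unit vectors is shorter than unit length unless the two triangles are coplanar.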
  9. I just want to mention that you have to be very careful with these sorts of scripts when dealing with non-planar quad polygons that may be present and selected. For example, consider the following view through a Parallel camera (used as the Viewport camera to minimize perspective distortion).

The magenta diagonal line is a guide that goes from Point 0 to Point 3, representing how this non-planar quad is presently being triangulated by Cinema 4D. The yellow line segment emanating from the center (actually through the center, sort of, since it starts quite a ways below the surface of the quad, at what Cinema 4D considers the center point of this quad) represents what Cinema 4D considers the polygon's normal, with Polygon Normals enabled for display in the Viewport (and shown, via Preferences, at a 400% scale, so that they are long enough to be "legible"). Similar yellow line segments going through the corner points of the polygon - added via creative use of a Matrix object (note the little, originally white, cubes at the vertices that it thinks it is aligning!) - overlay a copy of the polygon normal at each point, so that it can be compared with the light gray vertex normals coming up from each point, as rendered by Cinema 4D, since Vertex Normals are also enabled for the Viewport. In addition, the point indices are displayed to label the individual vertices, courtesy of the aforementioned Matrix object.

I should also note that there is a slight discrepancy between the direction of the Z Modeling Axis and the quad normal, as displayed. Active tool info: Move Tool in Polygon mode, with the Modeling Axis set to Selected/Axis and all else default. This is not a figment of your imagination, and it is noticeable regardless of whether the Viewport is using a Perspective camera or a Parallel camera. They are truly pointing in slightly different directions, and I don't know which of the two is "correct."
Here are the point coordinates making up the sole polygon of the polygonal object (uncreatively) named nonplanar_quad, as shown via the Structure Manager. Finally, here is what the Python script quoted in the above post produces as the normal for the polygon comprising our nonplanar_quad object, along with additional data describing the point properties of the object and its sole polygon, as retrieved via the usual Cinema 4D Python API member functions called on the object.

The resulting normalized unit vector (last line of output, above) lacks a Z directional component, even though Cinema 4D's interpretation (via those yellow normal line segments and their leftward tilt in the view) clearly shows that it thinks a Z component should be present in the polygon normal. The cause of the difference is that the quoted script uses only polygon points a, b, and c to calculate the cross product of rays ba and cb as the polygon's normal. For this non-planar case, polygon point d also plays a role in what Cinema 4D considers to be the polygon's "true" normal. Of course, we are not dealing with a quadrangle or polygon in the mathematical sense here, since it is non-planar, but it can be argued that the (averaged) normal for this non-planar quad should take into account the individual normals of both of the planar triangles from which it is constructed.
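To make the discrepancy concrete, here is a small pure-Python sketch (the helper names and coordinates are mine, not taken from the quoted script) showing that a normal computed from only the first three points is completely blind to point d, while an averaged two-triangle normal is not:

```python
import math

def sub(p, q):   return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def three_point_normal(a, b, c):
    """Normal from the first three points only - point d plays no role at all."""
    return normalize(cross(sub(b, a), sub(c, b)))

def averaged_normal(a, b, c, d):
    """Average of the two component triangle normals (▲abc and ▲cda)."""
    n1 = normalize(cross(sub(b, a), sub(c, b)))
    n2 = normalize(cross(sub(d, c), sub(a, d)))
    return normalize(tuple((n1[i] + n2[i]) / 2 for i in range(3)))

a, b, c = (0, 0, 0), (1, 0, 0), (1, 1, 0)
d_flat, d_lifted = (0, 1, 0), (0, 1, 0.5)

print(three_point_normal(a, b, c))          # (0.0, 0.0, 1.0) for any choice of d
print(averaged_normal(a, b, c, d_flat))     # (0.0, 0.0, 1.0) - agrees when planar
print(averaged_normal(a, b, c, d_lifted))   # tilts once d leaves the plane
```

For a planar quad the two definitions coincide; they only diverge when d leaves the a-b-c plane, which is exactly the situation in the screenshots above.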
  10. The short answer is, "yes." Many gemstones have complex IOR values. In particular, gemstones with a single IOR >= 1.65 have highly complex refractive IOR values that differ with wavelength, viewing angle, changes in density due to impurities in the stone, etc. In fact, the single real-world dispersion value that is often touted (e.g., the Abbe value), which takes complex IORs at different wavelengths into account, is just an approximation: it is either based on a single wavelength of light, usually tied to a particular Fraunhofer line (e.g., nD, based on the Sodium D line), or on the difference in IOR between two pre-defined Fraunhofer lines (e.g., the B and G lines) at their respective wavelengths. For your specific case of blood, you can see a graph of the refractive index (eta) and extinction coefficient (kappa) at various wavelengths at: https://refractiveindex.info/?shelf=other&book=blood&page=Rowe - a good site for getting complex IOR curves for various materials.
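As a concrete illustration of how such a single dispersion number collapses a whole curve, here is the standard Abbe-number formula evaluated with approximate catalog values for BK7 glass (the index values are from memory and only illustrative):

```python
# Abbe number: V_d = (n_d - 1) / (n_F - n_C), where n_d, n_F, and n_C are the
# refractive indices at the Fraunhofer d (~587.6 nm), F (~486.1 nm), and
# C (~656.3 nm) lines. Approximate values for BK7 glass:
n_d, n_F, n_C = 1.51680, 1.52238, 1.51432

V_d = (n_d - 1) / (n_F - n_C)
print(round(V_d, 1))  # roughly 64 for BK7 - one number standing in for a whole curve
```

A higher Abbe number means lower dispersion; the point is that three sampled wavelengths are being used to summarize a continuous, material-dependent function of wavelength.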
  11. There is no direct support, but you can do this in XPresso using the technique illustrated in the following image.

Image Legend:
- Layer Manager, followed by Object Manager (partial screencaps)
- XPresso Editor layout graph showing all nodes and connections

The Set Layer By Name node is a Python XPresso node, reachable via New Node -> XPresso -> Script -> Python. After creating the node in XPresso, you can pop out and use the Expression Editor to enter the code for it, resulting in the forthcoming image. Note: I have placed the actual code in text form below its graphic rendition, so that you can easily copy it into the Expression Editor.

```python
from typing import Optional
import c4d

doc: c4d.documents.BaseDocument  # The document evaluating this node

# This code is not meant to perform the action described in absolutely the most
# efficient way possible (which would potentially replace recursion with
# iteration and add complexity), but it is sufficient, commented, and
# illustrative with regard to how this could be implemented in a Python
# XPresso node.

# Perform a recursive breadth-first search for a Layer whose name matches the
# name provided as the input to the XPresso node (i.e., LayerName).
# Returns: the first (name-)matching layer found, or None if no layer has the
# specified name.
def find_layer(cur_layer: c4d.documents.LayerObject) -> Optional[c4d.documents.LayerObject]:
    global LayerName  # Node input

    # Check for a name match with the current layer
    cur_layer_name = cur_layer.GetName()
    # <Uncomment to debug> print(f'Layer being tested for equality to "{LayerName}": {cur_layer_name}')
    if cur_layer_name == LayerName:
        # <Uncomment to debug> print('Matching layer found')
        return cur_layer

    # If no match, test its sibling layers (if any)
    cur_sibling = cur_layer.GetNext()
    while cur_sibling:
        if layer_found := find_layer(cur_sibling):
            return layer_found
        cur_sibling = cur_sibling.GetNext()

    # If no match, test its child layers (if any)
    cur_child = cur_layer.GetDown()
    while cur_child:
        if layer_found := find_layer(cur_child):
            return layer_found
        cur_child = cur_child.GetNext()

    return None

def main() -> None:
    global doc    # Document containing this node
    global Layer  # Node output

    starting_layer = doc.GetLayerObjectRoot().GetFirst()
    Layer = find_layer(starting_layer)
    # <Uncomment to debug>
    # if not Layer:
    #     print(f'A matching layer was not found, for layer name: {LayerName}')
```

A side note for the color purists of the forum (feel free to skip the following mumbo-jumbo entirely if you are among those who "just wing it" in the identification-and-choice-of-color department): I know that my choice of pink looks like I mistook the color magenta for pink, and in fact, being myself a "proud card-carrying member" of the fictional Worldwide Association of Self-Proclaimed Color Purists (WASCP), I had the same lingering thought after reviewing this message subsequent to its submission.
Any similarity of my pink to magenta is just an unfortunate side effect of converting through multiple color spaces to arrive at the common denominator of sRGB for a post to this forum. Here is a larger swatch of the pink from the image above, compared to a similarly sized swatch of true magenta (depicted at the same brightness level). Depending on the color-gamut capability of the viewing device you are using to evaluate the following (sRGB color space) comparative image of the two colors - especially if said device is not both capable (at least 100% sRGB) and calibrated (ΔE≤2) - any differences between the pink (left rectangle) and magenta (right rectangle) may be both subtle and subjective in nature.

A really tangential side note: for anyone interested, it appears that the forum software auto-converts any .PNG images with an embedded P3 color space to sRGB. Before posting images, I would strongly advise you to perform the conversion yourself, with your (or your conversion software's) own idiosyncratic notions of how such a conversion should be performed - a dish best served cold, with a heap of salt to mask its bitter taste. This advice is especially important if the image happens to contain any colors that are outside the destination (i.e., sRGB) color space. Having said that, here is an image with an embedded P3 profile, so you can judge for yourself how the forum software converts it to sRGB, because it does appear to be (even more) subtly different when compared to the preceding image of the two swatches. The differences are more apparent for the initially far-more-saturated right-hand magenta swatch, since the P3-to-sRGB conversion process used by the forum software is likely very different from the one I used to perform the conversion.

This forum auto-conversion resulted in a rendition of the magenta swatch that is significantly brighter and a bit less saturated, which may now be more apparent if you compare the two sets once again, now that I have pointed the difference out. I will once again remind you that whatever differences you see, if any, are highly dependent on the state and capability of your viewing device, because the differences are quite subtle. If you are interested in the details that are considered, and the tradeoffs that are made, when selecting and using one of the more common color-space conversion approaches, I would highly recommend the following layman-level article that explains the process of conversion and the various approaches commonly employed: Cambridge in Colour - COLOR SPACE CONVERSION.
  12. Yes, some sort of formula compiler would definitely come in handy and help cut down on "math-op node bloat."
  13. There is answer need no question that to makes sense no, thanks for Google Translate cooperation please you post. ..., in an effort to get the point we are making across to you, prior to just "plonking" you outright with an ignore. Do you know what's worse than a completely incoherent non-native English speaker who can at least admit to being at that level? A completely incoherent non-native English speaker who thinks that their incoherency is the fault of (all of the) listeners, rather than in their inability to form coherent self-consistent English phrases and meaningful unambiguous sentences. Here is a link to a poem that you should definitely read - it very much applies to you. Maybe after reading it, you will come to appreciate the point we are trying to get across and act accordingly: https://stihi.ru/2018/02/28/1470
  14. Unfortunately, I do not have a blog or other site to put this info on. I will see if the Admins are willing to allow some sort of pinning, tagging, or other mechanism for getting tutorials into a place where they are readily available (and visible, regardless of the age of the posts), going forward.
  15. Let me make a constructive suggestion: please use Google Translate to translate from whatever native language you speak to English. Your English is extremely difficult to follow, and that is not a good starting point for getting an answer to your question. Having to undertake the mental load of trying to understand your English is more than any of us have the patience for. I am just trying to be helpful, even though statements like "You just don`t know an answer." are an assumption on your part, because I can practically guarantee that people on this forum definitely do know "an" answer to most of the questions that get asked, yours included.
  16. In using our script on the artistic reinterpretation of the Platonic, above, we discover that we run into several pathological scenarios. One of these is the following child segment of our original composite Spline, as it looks in isolation, prior to being split out into its own Spline object and having its segment closed properly by our Python script.

Before we continue analyzing this Spline segment, let me add a tangential but important aside. If this tutorial had been on YouTube (or elsewhere), this is how the spline would have been presented to you, immediately followed by a discussion of the issues present. In my opinion, it is nearly impossible to reason about a spline as presented above, so I will take several liberties with how the spline is presented. These are as follows - and if you are not interested in the process of properly presenting a problem, feel free to skip to the Discussion section below.

First, I will enter Point Mode, so that the directionality of the spline becomes more apparent, even if only marginally so (and this one time, I am not being cynical - stay tuned). I could stop here, but I believe that even in this enhanced view, the Spline object is still very hard to reason about. Therefore, I will customize our view as follows.

First of all, I will modify the Spline's colors, which transition along its length from the Cinema 4D defaults of a bright, pale sky blue at the start of a Spline to a more saturated but darker azure at its tail end. Instead of this (default) color combination, I will leverage all of the information and color theory I know to guide me in choosing better colors for the two ends of what amounts to a gradient limited to two colors, one at each end of the spline's curve. To maximally leverage the very limited number of color stops that define the gradient of the Spline, I will choose two bright, saturated complementary colors.
This will force the central color of the gradient to be a neutral gray, as the two complementary colors cancel each other out. Using primary colors in combination with their complements in the form of secondary colors (in an [additive] RGB color space), three possibilities immediately come to mind:
- Red and its complement, Cyan
- Green and its complement, Magenta
- Blue and its complement, Yellow

The blue/yellow combination is a non-starter, because our eyes are very insensitive to the color blue. That leaves us with the first two options, which are largely a subjective matter of taste, and I will go with Green and its complement, Magenta. With this change made to the preferences, let's take another look at our spline.

Somewhat arbitrarily, I have decided to swap the two colors and have our Spline transition from magenta to green, rather than vice versa. In other words, magenta will be the color at the starting point of the spline and green the color at its end point. Because we used complementary colors, the middle portion of the spline is a bright neutral gray, providing us with another point of distinction from the two saturated colors at the endpoints - almost as if we had a three-color-stop gradient at our disposal, rather than the severely limited one made of only two color stops. Commit this technique to memory, since you may find it useful when facing similar gradient restrictions while trying to solve a potentially unrelated color-choice issue.

I will take one final step to enhance the display of our Spline, by adding point indices and directional triangles that may further aid understanding and intuition, both in terms of the layout of the Spline's points and the directionality of the Spline at those points. I used a bright blue color for the directional arrowheads, to distinguish them from the colors making up the Spline's gradient.
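The "complements cancel to gray" claim is easy to verify numerically - a tiny sketch in plain Python (RGB values in the 0-1 range):

```python
# The midpoint of a two-stop gradient is the average of its endpoint colors.
# Complementary colors in additive RGB sum to white (1, 1, 1), so their
# midpoint is the neutral mid-gray (0.5, 0.5, 0.5).
magenta = (1.0, 0.0, 1.0)
green   = (0.0, 1.0, 0.0)

midpoint = tuple((m + g) / 2 for m, g in zip(magenta, green))
print(midpoint)  # (0.5, 0.5, 0.5)
```

Any complementary pair yields the same mid-gray, which is why the choice between red/cyan and green/magenta is purely a matter of taste.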
Unfortunately, with regard to the point indices - which I had Cinema 4D generate through the same Cloner object that positioned the arrowheads at the Spline object's points - I have zero control over their placement. This makes the indices somewhat difficult to read at coincident points of the spline - the very topic I want to tackle in the rest of this post.

Discussion

As a starting point, the top of our (non-closed, initial) Spline object has three coincident points. These have indices 0, 10, and 20, which match the starting, middle, and ending point indices of our twenty-one-point spline (segment). We will ignore the middle point of the spline for the remainder of this discussion, since its resolution is both subjective and merits its own lengthy deliberation at some other time.

Looking at the start and end points, Point #0 and Point #20, we notice that they form a corner. Now, if I were to take the Python script as I presented it and apply it to this spline, it would remove Point #20, since it is coincident with Point #0, and change the Spline to be closed. Unfortunately, this would also alter the topology of this Spline: as you can see, the corner point gets replaced with a curve, and the tail end of the spline now overshoots before making its way back to the head end. What we would like to have seen, instead, is the following.

As the more experienced amongst you will realize, the issue involves the tangents associated with the starting and ending points of the spline. When the Python script does its magic and removes the end point of a spline segment that is not properly closed, it must consider, before the removal, the orientation of the tangents of the start and end points, and do its best to preserve the Spline segment's topology over the course of removing its last point and closing it. How to achieve this goal, my friends, will be the subject of our next lesson.
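The coincidence test and point removal at the heart of that step can be sketched in plain Python (the function name and tolerance are mine; this deliberately ignores the tangent-preservation problem discussed above):

```python
import math

def close_segment(points, tol=1e-6):
    """If the first and last points coincide (within tol), drop the last point
    and mark the segment as closed; otherwise, leave it untouched."""
    if len(points) > 1 and math.dist(points[0], points[-1]) <= tol:
        return points[:-1], True
    return points, False

# A square traced back to its starting point (5 points, last == first):
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 0)]
pts, closed = close_segment(square)
print(len(pts), closed)  # 4 True
```

The sketch is purely topological: it says nothing about the tangents at the surviving point, which is exactly why the naive version rounds off the corner.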
  17. Yes, but notice that what is happening is that the cut is being made as a projection of the spline in the direction of the camera's Z axis. This "logically" performs an extrusion of the spline in that direction through the plane, to find the points at which to cut it. Of course, this happens behind the scenes inside Cinema 4D itself, and bear in mind that it is not a very precise way of going about making a cut (i.e., cutting in the direction of the current view, with all of the perspective distortion that entails).
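The underlying projection can be expressed as a simple ray-plane intersection - a sketch in plain Python (the names are mine; this assumes the projection direction is not parallel to the plane):

```python
def dot(u, v):   return sum(a * b for a, b in zip(u, v))
def sub(p, q):   return tuple(a - b for a, b in zip(p, q))
def add(p, v):   return tuple(a + b for a, b in zip(p, v))
def scale(v, s): return tuple(a * s for a in v)

def project_point(p, direction, plane_point, plane_normal):
    """Slide point p along 'direction' until it hits the plane defined by
    plane_point and plane_normal (direction must not be parallel to the plane)."""
    t = dot(sub(plane_point, p), plane_normal) / dot(direction, plane_normal)
    return add(p, scale(direction, t))

# Project a point along the camera's -Z direction onto the plane z = 0:
hit = project_point((1.0, 2.0, 5.0), (0, 0, -1), (0, 0, 0), (0, 0, 1))
print(hit)  # (1.0, 2.0, 0.0)
```

With a perspective camera, the direction vector differs per spline point (each ray goes through the camera origin), which is exactly where the distortion mentioned above comes from.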
  18. Slicing geometry with a spline is a difficult operation and a very large topic. This is primarily because a spline is a curve with no width and no height, so the idea of slicing with one is quite foreign. Visually, one would like to pretend - as is the case for this scene - that the spline is extruded vertically and cuts the plane in that sense. But as powerful as our human ability to imagine is, we have to find ways of expressing our ideas mathematically, in a watertight way, to the computer (application). For an issue similar to yours involving the use of splines to alter geometry, I would encourage you to look at the example presented at the following link, as well as to read the accompanying commentary on the subject: Condition that generator treats inner spline segment as a hole
  19. I would like to present an intermediate-level C4D/Python script-based lesson, in the form of a solution to a task that would be cumbersome were it performed manually via the Cinema 4D UI. The script takes a source multi-segment Spline object and splits it into single-segment Spline child objects that are far more amenable to further editing via the Cinema 4D UI.

At least some fundamental (i.e., beginner/intermediate-level) knowledge of Python and Cinema 4D is a good pre-requisite for understanding a script of this complexity. I don't mean to scare anyone off, but be prepared to learn along the way if you are not starting from at least a beginner Python/C4D baseline. The goal here is not to teach introductory Python programming or introductory Cinema 4D modeling, but rather to teach how Python can be used to automate tasks that are cumbersome or nearly impossible to perform manually via the Cinema 4D GUI.

Depending on the interest in C4D/Python instruction at this intermediate level, the perceived value of instruction presented in this format, and the usefulness of the ideas presented (and of how said ideas can be implemented in C4D/Python), this may become the first of a series of similar lessons that I will post on the forum going forward. The instruction may evolve in both format and content, based on the comments and interests of forum users who find this educational information interesting, instructional, and ultimately useful (i.e., of value in terms of enhancing their C4D/Python skill set).

The source code presented can be dropped into the Cinema 4D Script Manager to perform the operation described, which can be likened to a command.
More importantly, this source code - unlike almost all of the other C4D Python source code you will find "in the wild" - is fully commented, teaching you all of the steps involved in accomplishing this task, the issues one may encounter, and how said issues can be overcome with a combination of out-of-the-(bounding-)box thinking and a good helping of "elbow grease." A little humor and cynicism is thrown in for good measure, in an effort to make the learning experience more pleasant and the techniques presented more memorable.

The script contains a reasonable level of error checking, with almost all errors raised as exceptions. In the event that one occurs, you can see informative text describing the nature of the error in the C4D Python Console, which you can bring up via the following steps:
- From the Main Menu, select Extensions / Console...
- When the Console window pops up, make sure that Python is selected in the left pane, to show Python output and allow for command entry into the terminal in the right pane of the Console window.

When the script is executed, any error messages raised as exceptions by the script will be output to this Python terminal window by Cinema 4D.

A brief note on style: I've used two spaces rather than four for indentation, because some of the lines of code - especially with all of the comments I sprinkled in - are already pretty long, and I did not want to increase the line count any further. I also did not break the code up into functions, keeping the logic flow fairly linear for instructional purposes. If this code is integrated into a larger project, it would behoove the developer using it to wrap the code into classes and member functions, etc. Otherwise, feel free to use the code, in whole or in part, as you see fit. I make no guarantees about its validity or applicability for any purpose whatsoever. It is only meant to be instructional in nature and no more.
Prerequisites: - A relatively modern version of Cinema 4D (at least one with Python 3 and proper support for typing) - The object to perform the operation on: a single selected, non-parametric Spline object made up of two or more segments Here is a set of steps you can use to create a multi-segment Spline object to test the functionality presented in the script: - Create several parametric splines (e.g., a Rectangle, an n-Side, and a Flower) - Select all of these Spline objects - Issue a "Connect Objects + Delete" command to combine them all into a single multi-segment Spline object Have fun, and I will perhaps post a sample scene and the effects of the script in the not-too-distant future. Also, if you discover any bugs, please point them out and I will correct them in the code. The full Python script is below (to run this script, create a new script in the Script Manager and replace the boilerplate code that normally gets generated for a new script with the code below): Let me put our script into practice with a sample scene. Let's start with a Platonic object: Next, we'll make the Platonic editable, go into Edge mode, select all of the (formerly parametric) Platonic's edges, and finally issue the Edge to Spline command. The result looks as follows: Now, we will select our newly created (composite) Spline and set its Type property to Bezier, leaving the remaining properties at their defaults. This change produces a far more interesting artistic reinterpretation of the original Spline, as shown below: This complex spline is composed of multiple segments - eleven, to be exact, but who's counting - and that, my friends, provides us with an interesting source Spline object to apply our script to, as we attempt to separate all eleven component segments of this composite Spline into separate single-segment Spline objects of their own.
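The core bookkeeping behind this kind of segment split can be sketched in plain Python. In the C4D Python API, a multi-segment spline stores the points of all of its segments in one flat list, and the per-segment information (e.g., via `SplineObject.GetSegment()`) records only each segment's point count — so building one single-segment spline per segment boils down to slicing the flat point list by those counts. The function `split_spline_points` below is a hypothetical, minimal sketch of that partitioning step, not the author's full script (which also has to create the child `SplineObject`s, copy tangents for Bezier splines, and insert the children into the document):

```python
def split_spline_points(points, segment_counts):
    """Partition a flat per-spline point list into per-segment point lists.

    points         - flat list of all points, segment after segment
    segment_counts - number of points owned by each segment, in order

    Returns a list of point lists, one per segment. Raises ValueError if the
    counts do not account for every point (a sanity check in the spirit of
    the script's own error handling).
    """
    if sum(segment_counts) != len(points):
        raise ValueError("Segment point counts do not add up to the "
                         "total number of points.")
    segments = []
    offset = 0
    for count in segment_counts:
        segments.append(points[offset:offset + count])
        offset += count
    return segments

# Example: 7 points belonging to two segments of 4 and 3 points
# (think: a rectangle and a triangle connected into one spline)
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 0), (3, 0), (2, 1)]
segs = split_spline_points(pts, [4, 3])
```

Each resulting point list would then seed one new single-segment spline child in the actual C4D script.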
  20. Put up a sample scene, because we still do not understand your English
  21. The present number of good, lengthy, multi-topic intermediate-level Cinema 4D videos that: - are not more than a decade old, stemming from Cinema's early heyday; - were produced by people who actually understand the topics themselves before trying to educate others on them (and not just playing the "I've been using C4D since 1892" card, only to reveal their ignorance as they attempt to teach); and - are capable of providing quality instruction, a skill far rarer than most believe - I have never come across it in school, work, or life in general ...can best be represented by a big fat 0, so this is a very, VERY welcome addition.
  22. I just realized that scrolling through my high-density circle images on the previous page is a good test for vertical-motion inversion artifacts on your LCD monitor. Additional references, if you want to do some proper tests: https://www.testufo.com/inversion http://www.lagom.nl/lcd-test/inversion.php#invpattern (Try both with a steady state (i.e., a fixed image on the screen) and also while vertically scrolling with all of the test samples visible on the screen.)
  23. Yikes, that one ⇪⇪ in particular makes my mathematically inclined, zero-curvature, Euclidean-coordinate 3D brain hurt! There's no possible rectangle or circle that this deceivingly 2D-seeming monstrosity can fit into. I am guessing that what is being portrayed would only be possible on (at least) a 3D spherical surface, rendered here as some sort of planar projection of the sphere's surface - just a guess, since my brain "does not fully compute" what is happening there.
  24. I'll post the final Python code for this once I run out of sensible improvements.
  25. It may appear in the above image that some of the circles come into contact, but I scrutinized the result very carefully at a higher zoom level, and the two selected circles in the following image are the closest ones I was able to find in my "thorough visual inspection" 🤓. (This would be easier to see if the Forum software didn't "have its way" with the attached .png images, causing blurring, aliasing, dithering, and other issues, probably due to internal image resizing and maybe JPG-ification as well.)