
Leaderboard

Popular Content

Showing content with the highest reputation since 03/18/2025 in Posts

  1. There are some very interesting new features. Particles have an extensive list of new functionality, from which the following stand out.

The Field-driven density distribution of particles is highly appreciated, along with the built-in noise shaders for both distribution and emission. Density control through shaders and Fields is something I've been asking for in MoGraph for years; finally it can be achieved through particles.

Noise Sampling for Color and Data Mapper
https://help.maxon.net/c4d/en-us/Content/Resources/Images/2025-2_Particles_NoiseSampling_01.mp4
https://help.maxon.net/c4d/en-us/Content/Resources/Images/2025-2_Particles_NoiseSampling_02.mp4

Neighbor Search algorithm for Flock, Predator Prey and Blending. Similar to the geometry density coloring effect I've been asking for.

Some Scene Node Capsules are now directly available alongside the other primitives in the drop-down menus. This was something I was actively arguing for from the early versions of Scene Nodes, as it was closely tied to UX. Things were pointing to parallel ecosystems of the same tools being developed inside the same application, or to tools that were not easily accessible, which would inevitably lead to confusion or frustration among new and old users alike. It wasn't too long ago that you could finally make capsules accessible from custom palettes, but that required users to know how to do it, and since Scene Nodes was so actively characterized as experimental, or as a system for advanced users, people preferred to stay away from it, missing some key features that were not that hard to use after all. Personally, I extend my custom layout with OM-supported capsules every new version.

Now the Line Spline is available along with the Break Spline, Branch Spline (which is an easier MoSpline Turtle), Catenary Spline, Dash Spline, Electric Spline, Partition and Pulse Spline. What I don't know is whether they re-developed those tools in C++, or whether they are just links to the node implementation with modernized icons to fit the rest of the UI.

Well, there are still some arguments about what should be characterized as a Generator, what as a Modifier, and what as a Deformer... For example, the Branch Spline is better fitted to the Generators club, as it does create additional splines on top of the original, but I guess they ran out of ideas on how to represent it with a new icon without causing confusion with the MoSpline... Which leads to other old arrangement arguments, like why don't we just have new features as modes of older tools... For example, the Branch Spline as a mode of the MoSpline... Break, Dash, Partition and Pulse as modes of a single spline deformer/modifier, or Wrap, Shrink Wrap and Spherify as modes of a single deformer... Looking forward to seeing all distribution nodes as modes in the Cloner, and Blue Noise as a mode of the Push Apart Effector.

New mode for the Look at Camera expression. I always thought this was an old remnant from earlier C4D versions... I never used it because I achieved the exact same effect using the Target tag... I still don't know what the difference between them is...

Some unfortunate translation issues: weird title, not available in English (fixed the next day)
https://help.maxon.net/c4d/en-us/Default.htm#html/OFPBLEND-FP_BLEND_OBJECT_OUTPUT.html

MAXON ONE capsules are not documented... Bad marketing, as other users don't know what they are missing.

Things we saw in the teaser video that are not documented (yet): the Constellation Generator (Plexus effect) and the Liquid Simulator (yeah... the thing that should have been teased the most was not).
    4 points
  2. Maxon Unveils Game-Changing Cinema 4D 2025 Update: Enhanced Modeling, Texturing, and Scene Nodes

March 31, 2025 – Los Angeles, CA – Maxon has officially announced the highly anticipated Cinema 4D 2025.1.4 update, promising groundbreaking new features that will redefine the 3D animation and motion graphics industry. This latest iteration of Cinema 4D introduces significant improvements to modeling, texturing, Scene Nodes, and animation workflows, ensuring a more seamless experience for artists.

Official Statement from Maxon’s CEO

“Cinema 4D 2025.1.4 is not just an update—it’s a revolution. We’ve listened to our users and implemented features that streamline workflows, accelerate creativity, and push the boundaries of what’s possible in 3D design. Our latest enhancements to modeling, texturing, and Scene Nodes will empower artists like never before.” — David McGavran, CEO of Maxon

Key Features of Cinema 4D 2025.1.4:

🔹 Enhanced Parametric Modeling Tools – New and improved parametric objects offer greater control, including advanced spline editing and interactive beveling.
🔹 Advanced Scene Nodes 2.0 – A major update to Scene Nodes introduces a more intuitive interface, expanded procedural modeling options, and real-time performance boosts.
🔹 Improved UV Packing and Unwrapping – A completely reworked UV workflow includes automated packing, distortion minimization, and island grouping for better texturing efficiency.
🔹 Neural Render Engine (NRE) – Cinema 4D now features an AI-powered render engine that reduces render times by up to 90% while maintaining photorealistic quality.
🔹 Real-Time Path Tracing – A fully integrated real-time path tracer allows users to see near-final renders directly in the viewport.
🔹 Auto-Rigging AI – Character animation just got easier with an intelligent rigging system that auto-detects joints and optimizes weights instantly.
🔹 HoloC4D VR Integration – For the first time, users can sculpt, animate, and render in a fully immersive VR environment.
🔹 Redshift Cloud Render – A new cloud-based rendering system allows users to offload heavy render jobs and access high-performance GPU farms directly from Cinema 4D.
🔹 Deepfake MoGraph Generator – A new AI-assisted tool that generates realistic face animations from a single image input.
🔹 AI-Assisted Material Suggestion – The new AI-driven material engine suggests realistic textures and shader settings based on your scene context.

Spline SDF – Smooth blending of 2D splines.
Outline Capsule – Unlimited number of outlines with the new Outline Node and capsule.
Chamferer – The new Chamferer generator brings non-destructive, parametric editing to individual spline control points.
Field Distribution Cloner Mode – Fields can now be used to place child instances by mapping the hierarchy of the children to the intensity of the field.

Release Date and Availability

Cinema 4D 2025.1.4 will roll out as a free update for all Maxon One subscribers starting April 1, 2025. Perpetual license holders will have the option to upgrade at a discounted rate. For more details, visit www.maxon.net.

Exclusive Interview with Maxon’s CEO, David McGavran

Q: What was the main focus for Cinema 4D 2025.1.4?
David McGavran: “Our primary goal was to refine and enhance the tools that artists use daily. We wanted to create a more intuitive and efficient experience, whether it's through procedural modeling, advanced texturing, or improved rendering. The new Scene Nodes 2.0 is a huge leap forward, allowing artists to build complex structures with ease.”

Q: How does this update compare to previous ones?
David McGavran: “While every update brings innovations, this one focuses on practical enhancements that improve workflow speed and creative flexibility. We've also made significant improvements to UV workflows, Boolean modeling, and dynamic simulations based on user feedback.”

Q: Can you give us a sneak peek into the future of Cinema 4D?
David McGavran: “Absolutely. We are already developing features for Cinema 4D 2026 that will push procedural modeling even further. Expect deeper Redshift integration, better scene organization tools, and an overhaul of particle simulations. We're also exploring more real-time collaboration features to make remote workflows even smoother.”
    4 points
  3. New around here and back in C4D land after a VERY long break. I was an avid Modo user, and I'm also a former Foundry employee from back around the time of the acquisition 😅 although I was a designer, not a product manager. I do still have a bunch of friends there, and sadly, when development stopped, some of my friends got laid off.

The TL;DR of why Modo probably isn't around is its small foothold in professional markets. It's the best modeller out there; nothing else even comes close. There's one or two people in most VFX studios, and they're generally very happy and very productive. But Modo in its early days had a huge number of hobbyist users. Post buyout, they started leaving in droves, initially in part due to the Foundry wanting to push it further into VFX (which didn't work), and then the company pivoted to a number of product design contracts and focused a lot of development there. The hobbyists, initially frustrated and then priced out, left mostly for Blender.

A lot of professional users stuck around, but those relationships often became more strained over time. Modo was never the most stable application, and it's had a rocky history with stability depending on the release. Some releases might bring awesome features, but they also might crash 10 times a day. A lot of those users left too.

In terms of features though, there's a lot of incorrect info out there about Modo. It had a rep for only being a modeller, which just wasn't true; it had a bunch of awesome features: dynamics, some pretty decent mograph/replication tools, sculpting and painting (about on par with BodyPaint FWIW), a kickass (but CPU-only) renderer, a super intuitive material workflow, some fairly decent animation tools and some crazy customisation capabilities. But it was very much a jack of all trades, and none of those aspects were strong enough compared to the modelling. It was either "it only does modelling, right?" or "every feature apart from modelling sucked"; the truth, as always, was somewhere in the middle. It covered a lot of the same ground as Blender, with a lot of the same weaknesses; it just cost much more 😂 I imagine unless you were a serious modeller and super committed, it became a very hard sell.

I've now made the decision to move the bulk of my work back to C4D, as I'm not a huge fan of Blender and hate Autodesk with a passion. I've always had a soft spot for C4D, and it was my main package for about 5 years before Modo. I'm not a super serious modeller, and most of what I do is product shots and mograph-adjacent things. I'll still be doing my modelling in Modo for the foreseeable though; they've issued a 10-year EOL license that anyone can get hold of. My commercial license runs out next month.

...that was quite a long TL;DR, but someone might find it interesting, IDK.
    4 points
  4. Well.... I've asked (and am still asking) for a Helix 2.0 with multiple modes like Logarithmic, Archimedean and Double, but primitives won't ever change, for fear of breaking compatibility with older projects. But you could try it just on a Symmetry Object ...
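(For anyone curious what those modes would mean mathematically: an Archimedean spiral grows its radius linearly per turn, a logarithmic one exponentially. A minimal Python sketch of the point generation, purely illustrative and independent of any C4D API:)

```python
import math

def spiral_points(mode, turns=5, samples=200, a=10.0, b=0.15):
    """Generate 2D points for one of the requested spiral modes.
    Archimedean: radius grows linearly with angle,  r = a + b*theta.
    Logarithmic: radius grows exponentially,        r = a * e^(b*theta).
    """
    points = []
    for i in range(samples + 1):
        theta = (i / samples) * turns * 2.0 * math.pi
        if mode == "archimedean":
            r = a + b * theta
        elif mode == "logarithmic":
            r = a * math.exp(b * theta)
        else:
            raise ValueError(mode)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# The difference is constant ring spacing vs constant growth ratio:
archimedean = spiral_points("archimedean")  # rings equally spaced
logarithmic = spiral_points("logarithmic")  # rings spread out exponentially
```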
    3 points
  5. I think you may have this backwards. In plain language:

Straight alpha: stores the full RGB colour for each pixel, but ignores how transparent it may or may not be, i.e. the transparency of a pixel has no impact on the colour stored. This means, for example, if you render a white cloud with a soft wispy edge in a blue sky, the rendered cloud will only contain white cloud colours; the blue of the sky will not be present in the rendered cloud, even where the alpha transparency eats into the cloud.

Premultiplied: this simply means the image being rendered has already had the background baked into the rendered colour data. In the cloud example it means that the cloud will start to turn blue as its edge becomes more transparent.

In practical terms, straight alphas can be great because there's no bleeding of the background into the visual RGB data; you can take your white cloud and throw it onto any background you like, and there won't be any blue from the sky creeping in. On the other hand, if you place your premultiplied cloud onto an orange sunset background, you'll get a blue halo around the cloud, which sucks.

However.... it isn't all roses. Sometimes you need the background colour to be baked into the transparent edge, because some things are just flat out impossible to render due to the number of layers present or the motion on screen. Here's one which screws me over regularly: what happens if I have a 100% transparent plastic fan blade, but the fan blade is frosted, and in the middle of the fan is a bright red light? Visually, the fan blade has the effect of looking like a swooshing Darth Vader lightsabre. It's bright red and burning white from the brightness, but what's there? A 100% transparent object.... A straight alpha will obliterate my rendering; it's 100% transparent plastic. You can see it, but the alpha channel decides to ruin your day, and now the rendering is useless. The only option here is a premultiplied alpha where the background is baked into the motion blur and SSS of the plastic fan blade. Sure, I need to make sure my 3D background somewhat matches my intended compositing background, but it's the only way to get any sort of useful render. The same goes for motion blur, DOF blur, and multiple transparent layers in front of each other (steam behind glass).

The honest answer is: use whichever one is least likely to screw you over. If you have lots of annoying transparent/blurry things to deal with, go premultiplied, but plan your background ahead of time. If you want clean alphas on a simpler object render, go straight alpha.

I haven't read your linked blog all the way through, but I will say... there is an abundance of wrong people loudly proclaiming themselves to be founts of all knowledge. There's one in the Octane community who insists on inserting himself into literally every thread on the entire Octane forum to tell you you're an idiot for using a PNG file; he has hundreds of blog pages which are a strange mix between 3D rendering and flat-earth magical woo-woo maths to show everyone just how right he is. That said, your rainbow example does match up with what the blog says. The only difference is the blog seems to think the straight alpha is evil and you should only use the premultiplied, whilst I would say both have their uses, with straight being preferable when possible.
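To put numbers on the cloud example, here's a minimal numpy sketch of the standard over operation for both alpha styles (all values invented for illustration):

```python
import numpy as np

alpha = 0.5                                   # a 50%-transparent cloud-edge pixel
cloud_white = np.array([1.0, 1.0, 1.0])       # straight alpha: pure cloud colour
blue_sky    = np.array([0.0, 0.0, 0.5])       # the sky behind it at render time

# A premultiplied render already has the sky baked into the pixel:
premult = cloud_white * alpha + blue_sky * (1.0 - alpha)     # [0.5 0.5 0.75]

# Now composite over a new orange sunset background.
sunset = np.array([1.0, 0.5, 0.1])

out_straight = cloud_white * alpha + sunset * (1.0 - alpha)  # premultiply at comp time
out_premult  = premult + sunset * (1.0 - alpha)              # foreground added as-is

print(out_straight)   # [1.   0.75 0.55] -> white fades into sunset, no blue
print(out_premult)    # [1.   0.75 0.8 ] -> the old sky's blue leaks in: the halo
```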
    2 points
  6. Hey Jeff - Maxon is recording the NAB presentations and plans to post them 1-2 weeks after the show.
    2 points
  7. There are, if we look carefully, a number of things subtly wrong with the settings in this scene. As HP predicts above, Render Perfect on the sphere is one of them, and turning that off fixes the effect of the collision deformer not showing in the render.

Next, we had some suspect settings in the Sweep object (Parallel Movement should be off, Banking should be ON, and End Rotation should be 0 degrees), which fixes the twist in your path spline over the indent, and restores it to correct, contiguous circular form all the way along its length...

The mode of the Collision Deformer should be Outside (Volume), and the capsule should ideally not be a child of it, though it doesn't particularly matter in this scene. Likewise, object order in the OM could be better, bearing in mind that Cinema scans downwards through it from the top. Below is more ideal I would say, and general good practice.

Next, the curvature in your path spline within the sweep isn't quite matching the curve of the indent in the sphere, resulting in some unevenness in that area. This can be fixed by getting a new path spline for the sweep, which we get by doing Current State to Object on the main sphere (once collided) and then Edge to Spline on the centre edge loop of that. But first we have to fix the collision object, which should also ideally be a sphere, so that we can choose the Hexa type and thereby avoid the complex pole on the end of the capsule it replaces, which was previously confusing the collision deformer and producing some wanky / uneven collision vertices in the main sphere at the apex of the collision point, which would obviously affect any spline you subsequently derived from it.

In my version I wanted a better, higher-resolution sphere, but not one that caused the collision deformer any extra work, so I changed the type of that sphere to Hexa as well, which then necessitated the addition of a Spherify deformer before the Collision Deformer (because hexaspheres are not mathematically spherical out of the gate). And then, lastly, that went under SDS to give us improved resolution, and my Current State to Object was performed on the SDS instead of the sphere itself, for maximum spline matching.

Anyway, I have fixed all that in the scene attached below... groove-on-sphere CBR Fix.c4d CBR
    2 points
  8. Fun facts:

The text was AI generated. I did not expect a reference to the CEO by name. Later I asked for an interview. It wasn't elaborate prompting at all; I asked three times all in all. I was clear about my intentions, so it knew it was supposed to be an April Fools article.

I asked for C4D screenshots for the next version... It kept drawing Blender elements on them. Curiously, there's a repeating pattern that resembles a mix of the old Blue Pearl logo with the R21 splash screen (see the last two images). I removed the AI image from the article the next day just because it was too cursed... I later thought I could make the YT image link to a Rick Roll video, but it was too late.

The Rocket Lasso image is a manipulated version of the S26 one. I asked for a new C4D logo but it refused; there must be a limit of 3 generated images per day, I guess.

The Chamferer icon is the CV-Chamfer icon. I designed the Spline SDF icon by templating the Spline Mask icon.

I planned this prank 4 days in advance. Spline SDF, Chamferer, Outline and Field Distribution are all real, but not as versatile to construct as they are presented. I did not expect the Spline SDF to be that fast in the viewport. The Field Distribution was the most time consuming, as I had to construct an XPresso rig to manipulate 4 concentric Torus Fields from the radius of the visible Spherical Field... a real pain in the a$$, and I didn't manage to link the Inner Offset to the radius of the innermost Torus to make it more believable. Fortunately, I didn't have to spend extra time recreating the UI of the Cloner, as I had already done that a week earlier as a future suggestion for MAXON.
    2 points
  9. A plugin is only needed if you want a "parametric" setup. If you want to create just a static object, the internal Edge to Spline command is enough. For example:

1. Create a cube, 100×100×100.
2. Add a Cloner (Grid mode, 3×3×3, with size 100×100×100). The result is 27 cubes laid out perfectly next to each other, "sharing" the same edges.
3. Right-click the Cloner > Current State to Object.
4. Middle-click the newly created Cloner null object.
5. Right-click > Connect Objects + Delete.

With the first step you convert the parametric Cloner into separate geometries, with the second you select all the newly created geometries, and with the last step you make a single polygonal object which also contains the "inner" cage (all the edges from the individual cubes are still there).

6. Select the newly created single object, right-click > Optimize.
7. Switch to edge mode, select all edges (Ctrl+A in the viewport), right-click > Edge to Spline.
8. Drag the spline out from its child position under the geometry object.
9. Hide everything except the spline to see a nice spline cage in the viewport.

In these steps you optimize the mesh, select all edges and create the spline cage.

10. Add a Connect object as the parent of the spline cage.
11. Add an FFD deformer as a child of the spline cage (use the Fit to Parent command to adjust the FFD size exactly to the spline cage size).
12. In point mode, with the FFD deformer selected, select all FFD points, then deselect only the 8 corner points.
13. Scale them to your needs.

The Connect object makes sure the spline cage behaves as a whole, not as separate spline segments. The FFD covers/influences the whole spline cage, and with the selected control points and scaling you deform the spline cage as you wish. You will see the deformed spline cage is "linear": select the spline cage object in the Object Manager and change its intermediate points from Adaptive to Uniform. (You could also adjust the number of intermediate points if the deformed cage is not as smooth as you want.) timespace.c4d
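If you ever want to automate steps 3-8 in the Script Manager, something along these lines should be close. A rough, untested sketch: the MCOMMAND constants are real SDK commands, but the edge-selection count convention (4 edge slots per polygon) and the BaseSelect.SelectAll() signature have varied between C4D versions, so treat it as a starting point:

```python
import c4d
from c4d import utils

def cloner_to_spline_cage(doc, cloner):
    """Bake a Cloner to one optimized mesh, then extract every edge as a spline cage."""
    # Step 3: Current State to Object (returns clones; they are not auto-inserted)
    baked = utils.SendModelingCommand(command=c4d.MCOMMAND_CURRENTSTATETOOBJECT,
                                      list=[cloner], doc=doc)[0]
    doc.InsertObject(baked)

    # Steps 4-5: Connect Objects + Delete (join the cubes into a single polygon object)
    joined = utils.SendModelingCommand(command=c4d.MCOMMAND_JOIN,
                                       list=[baked], doc=doc)[0]
    doc.InsertObject(joined)
    baked.Remove()

    # Step 6: Optimize, welding the coincident points on the shared edges
    bc = c4d.BaseContainer()
    bc[c4d.MDATA_OPTIMIZE_TOLERANCE] = 0.01
    bc[c4d.MDATA_OPTIMIZE_POINTS] = True
    utils.SendModelingCommand(command=c4d.MCOMMAND_OPTIMIZE,
                              list=[joined], bc=bc, doc=doc)

    # Step 7: select all edges, then Edge to Spline
    joined.GetEdgeS().SelectAll(joined.GetPolygonCount() * 4)
    utils.SendModelingCommand(command=c4d.MCOMMAND_EDGE_TO_SPLINE,
                              list=[joined],
                              mode=c4d.MODELINGCOMMANDMODE_EDGESELECTION, doc=doc)

    # Step 8: the new spline appears as a child of the mesh; move it to the top level
    spline = joined.GetDown()
    if spline is not None:
        spline.Remove()
        doc.InsertObject(spline)
    c4d.EventAdd()
    return spline
```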
    2 points
  10. Dear members, we are happy to announce a brand new YouTube channel focused on the C4D node system. In our research we realized that training content for nodes is very sparse and lacking in quality and depth. We already have two lessons available, which will be part of an ongoing series. A humble request from our side: even if you are not into nodes, please subscribe so that the channel grows, which will enable us to monetize it down the road. On a longer timeline this could enable us to reduce the subscription period, or even remove it altogether and open up the forum for many more artists. https://www.youtube.com/@CORE4D Thank you and enjoy the content! P.S. Any member willing to contribute to the channel is more than welcome - drop us a message : )
    2 points
  11. @b_ewers 1. Change the Maxon Noise > Input > Source to UV/Vertex Attribute, so that the noise samples in 2D Texture Coordinate (UV) space, rather than 3D space. 2. Significantly reduce the overall scale from 100 to something like 1.2 3. Adjust the relative scale to something like 1, 20, 1 to get the vertical streaking. 4. Increase contrast to 1 5. Change the noise type to Turbulence uv-space-noise_v01.c4d
    1 point
  12. I was absolutely perplexed by something that seemed so simple at first. Later on, I acquired some mental trauma from tracking down a particularly nasty bug around alpha and PNGs. You'd probably not be surprised that once one becomes more or less familiar with the nuances around alpha channel handling, nuanced bugs can crop up in even the most robust software. So... yes. 🤣 This is where I encourage folks to gain enough confidence in their own, hopefully well researched, understanding. Eventually, it enables folks to identify where the specific problem is emerging.

The skull demonstration has zero-alpha regions within the "flame" portions. Following the One True Alpha formula, the operation should be adding the emission component to the unoccluded plate code values. There is no "scaling by proportion of occlusion" of the plate that is "under", as indicated by the alpha being zero. The author has some expanded commentary over on the issue tracker.

The following diagram loosely shows how the incoming energy, the sum of the green and pink arrows, yields a direct "additive" component in the green arrow, and the remaining energy, indicated by the pink arrow and scaled by whatever is "removed" in that additive component, is then passed down the stack. If this seems peculiar, the next time you are looking out of a window, look at the reflections "on" the window. They are not occluding in any way! Similar things occur with many, many other phenomena, of course. For example, in the case of burning material from a candle, the particulate is effectively so close to zero occlusion as to be zero. Not quite identical to the reflection or gloss examples, but suitable enough for a reasonable demonstration.

Sadly, Adobe has been a complete and utter failure on the subject of alpha for many, many years. If you crawl over the OpenImageIO mailing list and repository, you will find all sorts of references as to how Adobe is mishandling alpha. Adobe's alpha handling is likely a byproduct of tech debt at this point, culminating with a who's who of image folks over in the infamous Adobe Thread. Zap makes a reference to the debacle in yet-another-thread-about-Adobe-and-Alpha here. Gritz, in the prior post, makes reference to this problem. You can probably read between the lines of Alan's video at this point: this is two problems, one of which I've been specifically chasing for three decades, and one that relates to alpha.

As for the alpha problem, I cannot speak directly to After Effects as I am unfamiliar with it, but the folks I have spoken with said that the last time they attempted it, Adobe still did not properly handle alpha. I'd suggest testing it in some other compositing software just to verify the veracity of the claims folks like myself make, such as Nuke Non-commercial, Fusion, or even Blender. All three of those should work as expected. My trust that Adobe will do anything "correct" at this point is close to zero.

As for "colour management"... that is another rabbit hole well worth investigating, although it's probably easier to find a leprechaun than to pin down what "colour management" in relation to picture authorship means to some people or organizations. Keeping a well researched and reasoned skepticism in mind in all of these pursuits is key. 🤣
    1 point
  13. Faking it can be done with the tutorial above, using the multitude of tools you have at your disposal. I think it would be quite interesting to have a physically correct rig for this 🙂
    1 point
  14. I haven't done this myself so far, but I suspect the best way to go here might involve cycloid splines, blend mode cloners and Field Forces. Searching around those terms I found this tutorial from Insydium. Obviously, they are using their own particle system, but there is no reason to think the native one couldn't also do it... though I am probably not the best person to advise on the specifics of that; I don't have much experience in Cinema particles yet ! CBR
    1 point
  15. Apologies... I can only post one post per day. Probably better that way... 🤣

The issue, as the linked video in the previous post gets into, is sadly nuanced. The correct math within the tristimulus system of RGB emissions is remarkably simple; it is just that software and formats are less than optimal. The dependencies of the "simple compositing operation" are:

1. Software employed. Many pieces of software are hugely problematic. See the infamous Adobe thread as a good example.
2. File formats employed. Some file encodings cannot support the proper One True Alpha, to borrow Mr. Gritz's turn of phrase, such as PNG.
3. Data state within software. Even if we are applying the proper mathematical operation to the data, if the data state is incorrect, incorrect results will emerge.

The short answer is that if we have a generic EXR, the RGB emission samples are likely normatively encoded as linear with respect to normalized wattages, and encoded as gained with respect to geometric occlusion. That is, in most cases, the EXR data state is often ready for compositing. If your example had a reflection off of glass, a satin or glossy effect, a flare or glare or glow, a volumetric air material or emissive gas, etc., you'd be ready to simply composite using the One True Alpha formula, for the RGB resultant emissions only^1:

A.RGB_Emission + ((100% - A.Alpha_Occlusion) * B.RGB_Emission)

Your cube glow is no different to any of the other conventional energy transport phenomena outlined above, so it would "Just Work". If, however, the software or the encoding is broken in some way, then all bets are off. That's where the video mentions that the only way to work through these problems is by way of understanding.

Remember that the geometry is implicitly defined in the sample. In terms of a "plate", the plate that the A is being composited "over" simply lists the RGB emission, which may be code value zero. As such, according to the above formula, your red cube RGB emission sample of gloss or glow or volumetric would simply be added to the "under" plate. The key takeaway is that all RGB emissions always carry an implicit spatial and geometric set of assumptions. This should never go wrong in well behaved encodings and software; if it does, there's an error in the chain! JKierbel created a nice little test EXR to see if your software is behaving poorly.

Hope this helps to try and clear up a bit of the problem surface. See you in another 24 hours if required... 🤣

--
1. The example comes with a caveat that the "geometry" of the sample is "uncorrelated". For "correlated" geometry, like a puzzle piece perfectly aligned with another puzzle piece, as in a holdout matte, the formula shifts slightly. The formula employed is a variation of the generic "probability" formula, as Jeremy Selan explains in this linked post. If we expand the formula, we end up with the exact alpha-over additive formula above. It should be noted that the multiplicative component is actually a scaling of the stimuli, based on energy per unit area. A more "full fledged" version of the energy transport math was offered up by Yule and (often misspelled) Nielsen, which accounts for the nature of the energy transport in relation to the multiplicative mechanisms of absorption attenuation, as well as the more generic additive component of the energy.
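For the formula-inclined, here is that exact operation as a minimal numpy sketch (values invented for illustration; scene-linear data can legitimately exceed 1.0):

```python
import numpy as np

def one_true_alpha_over(a_rgb, a_alpha, b_rgb):
    """A over B for associated-alpha data:
    A.RGB_Emission + ((100% - A.Alpha_Occlusion) * B.RGB_Emission)."""
    return a_rgb + (1.0 - a_alpha) * b_rgb

plate = np.array([0.2, 0.3, 0.8])                 # the "under" plate emission

# A glow / reflection / volumetric sample: pure emission, zero occlusion,
# so the plate passes through untouched and the sample is purely additive.
glow = np.array([1.0, 0.1, 0.0])
print(one_true_alpha_over(glow, 0.0, plate))      # [1.2 0.4 0.8]

# A fully occluding sample: the plate contributes nothing.
solid = np.array([0.5, 0.5, 0.5])
print(one_true_alpha_over(solid, 1.0, plate))     # [0.5 0.5 0.5]
```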
    1 point
  16. Hello all. Flattered to be mentioned here. I just wanted to point out that the statement is not quite correct; the result will indeed include an emission component in the RGB after the composite. With associated alpha (aka “premultiplied”) the emission is directly added to the result of the occlusion. This happens to be the only way alpha is generated in PBR-like rendering systems, and is more or less the closest normative case of a reasonable model of transparency and emission. It also happens to be the sole way to encode any additive component like a gloss, a flare, a glare, fires or emissive gases, glows, etc. I’ve spent quite a few years now trying to accumulate some quotations and such on the subject, from luminaries in the field. Hope this helps.
    1 point
  17. What I understand from all this is that the premultiplied method does not keep a separate alpha map of the image. This means that it encodes the colour and alpha channels on the same bitmap, hence the name, as the process of embedding the alpha map has already been done before previewing/opening the image. The straight one is like the raw data: you have two separate data structures, the colour and the alpha. It gets processed/viewed correctly by multiplying the corresponding values after the fact. With this you don't have to extract the alpha map if you need it, as it is already available.
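For what it's worth, converting between the two representations is a one-liner each way. A minimal numpy sketch; note what happens at alpha = 0, which is exactly the emission case discussed in the posts above:

```python
import numpy as np

def premultiply(straight_rgb, alpha):
    # Straight -> premultiplied (associated): scale colour by coverage.
    return straight_rgb * alpha

def unpremultiply(premult_rgb, alpha):
    # Premultiplied -> straight. Undefined where alpha == 0: any emission
    # stored there (glows, reflections) has no straight equivalent.
    return np.where(alpha > 0.0, premult_rgb / np.maximum(alpha, 1e-8), 0.0)

emissive = np.array([1.0, 0.2, 0.0])   # premultiplied glow sample, zero occlusion
print(unpremultiply(emissive, 0.0))    # [0. 0. 0.] -> the glow is destroyed
```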
    1 point
  18. Asked ChatGPT; it gave this answer:
R: 255, G: 0, B: 0, A: 0
R: 0, G: 0, B: 0, A: 0
    1 point
  19. The Classic - Carpet roll! 120_Carpet_Roll(MG+XP).c4d
    1 point
  20. If I were in this situation, I would disconnect my machine from the internet (kill wifi, pull ethernet), because then there's a good chance it will simply time out after 30 seconds.
    1 point
  21. I haven't watched the series yet myself, but that title sequence is wonderful. And the music is as harmonically interesting as the visuals and techniques are to us CG guys. Perfect alternating consonance and dissonance. For those of you who have an interest in such things, here's Charles Cornell to explain why that's also great ! CBR
    1 point
  22. @Cerbera That's perfect! Thank you very much! Eudes
    1 point
  23. I started in on the OpenPBR material with the new Redshift. I was surprised to see it in there, till I saw that Autodesk also included it in Arnold this last release as well. The future is now... OpenPBR. Saul Espinosa says it's the only shader he uses now (RS + Houdini). Subsequently, I just sent a ticket to Maxon to have it included in the default material dropdown.
    1 point
  24. I am afraid that you need to force quit to stop the process : /
    1 point
  25. You can reference a classic object into the graph. The geometry port is under the "op" group port. To import, use Ctrl+drag from the Object Manager into the graph; that will create a copy of the geometry in the graph itself.
    1 point
  26. @DasFrodo You are getting just the geometry from the cubes, without the transforms (matrices); the transforms are needed too.
    1 point
  27. You may find some examples of iteration in the Capsule file pit thread. If the link works, it will take you there, plus to a comment where more links to resources are mentioned. It's a great place for a better understanding of Capsules + Scene Nodes. 🙂
    1 point
  28. Use a Connect object as a child and put all the geometry objects there. Or use the object links input (drag the objects you want to use from the Object Manager into the object links panel), decompose it, and transform the geometry, iterating over the geometries and matrices, into a Clone Onto Points node. But I think the first solution is more understandable 🙂
    1 point
  29. Hi @bezo, thank you so much for the incredibly detailed guide! I finally got it to work by following your step-by-step instructions. Your explanation was essential for understanding the process. When you say "create just a static object," does that mean it's not possible to animate the deformation? Is there no way to achieve this using an animatable parameter of the FFD? If not, would there be another way to animate this deformation, perhaps using a Spherical Field? Thanks again for your help! Eudes
    1 point
  30. That can be a feasible (and very sensible) way to start when you need very specific (and especially numerically based) corner radii, but what you are actually doing there is defeating the point of SDS in 2 ways: firstly, you are forcing yourself right from the outset to be at much, much higher density than you need (which is rather contrary to the whole concept of live subdivision !), and secondly, you are manually / unnecessarily doing work that should be left to SDS itself. So we actually want very minimal topology at the corners, and should be using the type of corner loop termination (box corners vs inset style ones) and the distance of the neighbouring control loops from the main corner vertices to directly control the rounding on each corner, rather than defining it in topology. There ARE times when you NEED to define corners in topology, but this is not one of them, primarily because the rounding you seek is at a comparatively smaller scale than the overall topo density you need to describe the form and to allow it to bend evenly. CBR
    1 point
  31. Hi CBR. Thanks for confirming that for me, I should have realised sooner that it was something to do with the isoline. I've not tried the filter thing yet, but it's comforting to know it's a bug. I don't model very often and have to relearn all the things I've forgotten 🙂 Nice topo is not something that comes naturally to me, so it's good to see how it should be done. In this case I was after specific corner radii to match the spline I had been using, so I started out placing a disc on each corner to get the radius, then chopped those up and did the outline first before filling in the middle. Many thanks, Deck
    1 point
  32. I can confirm that until very recently there was a bug where having isoline editing on but the SDS cage hidden in filters menu was not working, and indeed that seems to be what is going on with yours - we can simply see both the isolines and the base cage. As you have spotted, it will be resolvable simply by disabling isoline editing. And I can also reassure you there is nothing wrong with your actual mesh, other than a few easily solvable tris that probably don't matter if left ! However, if I was doing that Sub-D I would go for much lower resolution in the base mesh, and all the quads, like so... We have enough topo here to handle a fairly decent level of bending at the base mesh level, but if not, we can run the bend deformer at the same level as the SDS and bend that instead. CBR
    1 point
  33. It's not a script; it's a plugin. Copy the plugin folder into the installation directory (where Cinema4D.exe is), into the "plugins" folder. If it doesn't exist, just create a folder named "plugins". Then you'll find the plugin in the main menu under Extensions. Click on it, and in the Object Manager drag the chosen object into the object link field. Splines from edges will be created immediately.
    1 point
  34. AMD FSR 4, AFMF 2.1 and RIS 2.0

Introduced in 2021, FSR is a GPU-based image upscaling system, equivalent to NVIDIA’s Deep Learning Super Sampling (DLSS) and Intel’s Xe Super Sampling (XeSS). It enables software to render the screen at lower resolution, then upscale the result to the actual resolution of the user’s monitor. The workflow improves frame rates over rendering natively at the higher resolution, without significant loss of visual quality if the upscaling isn’t too extreme. As well as games, FSR is supported by some CG software, including real-time visualization apps Lumion, where it is used in the editor and render preview, and D5 Render.

Perhaps learning from the previous major release, FSR 3, which was only supported by two games at launch, AMD has made FSR 4 available as a drop-in replacement for FSR 3.1. Users with new Radeon RX 9070 Series GPUs can upgrade any game that already supports FSR 3.1, the current version of the technology, to FSR 4. That includes Call of Duty: Black Ops 6, God of War: Ragnarök and Marvel Rivals: you can see a list of the “30+ games” in which the upgrade is or will be available on AMD’s blog. The update provides a “significant” improvement in image quality over FSR 3.1 upscaling, improving temporal stability and detail preservation, and reducing ghosting.

AMD Software: Adrenalin Edition 25.3.1 also updates AMD Fluid Motion Frames, its in-driver system for increasing frame rates by generating frames between those rendered conventionally. It was originally rolled out last year as part of HYPR-RX, AMD’s set of technologies for improving in-game performance. AFMF 2.1 improves the visual quality of the frames generated: according to AMD, it reduces ghosting, restores details, and handles on-screen text overlays better. As an in-driver technology, it works across “thousands of games”: AMD’s blog post has performance comparisons for Baldur’s Gate 3, Borderlands 3, Far Cry 6 and Forza Horizon 5.

The Adrenalin Edition 25.3.1 release also features the first major update to contrast-adaptive image-sharpening system Radeon Image Sharpening since it was introduced in 2019. RIS 2.0 provides “stronger, more responsive sharpening in more use cases”, particularly video playback. As well as games, RIS is supported in a range of other apps, including web browsers like Chrome, Edge and Firefox, Microsoft Office apps, and the VLC media player.

The FSR 4 upgrade feature, AFMF 2.1 and Radeon Image Sharpening 2.0 are available as part of the 25.3.1 release of AMD Software: Adrenalin Edition. The FSR 4 upgrade is available for software that already integrates FSR 3.1, part of version 1.1.13 of the FidelityFX SDK. The source code is available under an open-source MIT license. FSR 4 and RIS 2.0 require a Radeon RX 9070 Series GPU; AFMF 2.1 is supported on Radeon RX 6000 Series and newer GPUs, and integrated graphics on Ryzen AI 300 Series CPUs.

https://community.amd.com/t5/gaming/game-changing-updates-fsr-4-afmf-2-1-ai-powered-features-amp/ba-p/748504

Capsaicin 1.2

First released publicly in 2023, Capsaicin is a modular open-source framework for prototyping and developing real-time rendering technologies, primarily for games. It is designed for developing in broad strokes, creating simple, performant abstractions, not low-level hardware implementations, and is not intended for tuning high-performance tools. The framework is intended for developing Windows applications, but is GPU-agnostic, requiring a card that supports DirectX 12/Direct3D 12 and DXR (DirectX Raytracing) 1.1. AMD has used it in the development of its own rendering technologies, including an implementation of GI-1.0, its real-time global illumination algorithm.

As well as the GI renderer, the framework includes a reference path tracer. Other features include ready-made components for Temporal Anti-Aliasing (TAA), Screen-Space Global Illumination (SSGI), light sampling, tonemapping, and loading glTF files. The framework also includes HLSL shader functions for sampling materials and lights, spherical harmonics, and common mathematical operations, including random number generation.

To that, Capsaicin 1.2 adds support for rendering morph target (blendshape-based) animations, in addition to its existing support for skinned characters. The update also adds support for meshlet-based rendering, for streaming and decompressing high-resolution geometry at render time, in a way similar to UE5’s Nanite system. Other new features include support for the .dds texture file format, used in games including Elden Ring and GTA V, and bloom and lens effects from AMD’s FidelityFX toolkit. The update also adds a range of new tonemappers: the framework now defaults to ACES tonemapping, with support for Reinhard, Uncharted2, PBR Neutral and AgX, the latter now supported in Blender, Godot and Marmoset Toolbag.

The Capsaicin source code is available under an open-source MIT license. It can be compiled for Windows 10+ only, and requires a GPU that supports Direct3D 12 and DXR 1.1. Compiling from source requires Visual Studio 2019+ and CMake 3.10+.

https://gpuopen.com/learn/new_content_released_on_gpuopen_for_amd_rdna_4_on-shelf_day/
https://gpuopen.com/capsaicin/

Flow Studio

Autodesk has rebranded Wonder Studio, its AI-powered online platform for inserting 3D characters into video footage, as Autodesk Flow Studio. The change in branding officially makes Wonder Studio part of Flow, Autodesk’s cloud services platform for media and entertainment, but there are no changes to availability or pricing.

In separate news, an update to the platform last month made extracting clean plates and camera tracking data from source footage available as standalone services, or ‘Wonder Tools’. Unlike when processing a complete Live Action project – that is, generating a new rendered video and supporting data – the new Camera Track Wonder Tool can be used on footage that does not include an actor. In contrast, the Clean Plate Wonder Tool can only be used to remove human actors from footage, not animals or objects, although it can be used for up to four actors per shot.

The Autodesk Flow Studio platform is browser-based, and runs in Chrome or Safari. It does not currently support mobile browsers. Lite subscriptions have a standard price of $29.99/month or $299.88/year, can export rendered video at 1080p, and can export mocap data, clean plates, camera tracks and 3D scenes. Pro subscriptions have a standard price of $149.99/month or $1,499.88/year, raise the maximum export resolution to 4K, and also make it possible to export roto masks. Usage is credit-based: processing one second of video uses 20 credits. Lite subscriptions include 3,000 credits/month, Pro subscriptions 12,000 credits/month. The Terms of Service give Autodesk a non-exclusive licence to use any content created via the platform to develop its AI models.

GIMP 3.0

The GIMP development team has released GIMP 3.0, the first major update to the free and open-source image editing and retouching software in seven years. The release adds non-destructive layer effects and text styling, off-canvas image editing, support for the Adobe RGB color space, and better interoperability with Photoshop. There are also a number of long-awaited usability improvements, including the option to multi-select layers.

The first major update to the GNU Image Manipulation Program since 2018, GIMP 3.0 is a correspondingly significant release. The main change is support for non-destructive layer effects, making it possible to edit filters and image adjustments like Curves and Hue-Saturation after they have been applied. Adjustments can also be toggled on or off, re-ordered, or merged. Non-destructive (NDE) filters can be saved in GIMP’s XCF file format, making it possible to continue to edit layer adjustments after re-opening a file.

The options for styling text have also been extended, with the Text tool getting new options for generating outlines around text non-destructively. A separate GEGL Styles filter – GEGL being the Generic Graphics Library, GIMP’s image-processing engine – also generates drop shadows, glows and bevels, as shown above.

Other new features include support for off-canvas editing, with a new Expand Layers option for the paint tools making it possible to paint beyond the current boundaries of a layer. The layer is automatically resized to fit the new paint strokes, up to the boundaries of the image. There is also an experimental new selection tool, Paint Select, which makes it possible to select parts of an image for editing by progressively painting in the selection.

GIMP 3.0 also features some long-awaited workflow improvements: notably, the option to multi-select layers, channels and paths. Previously, it was only possible to edit multiple layers by selecting and linking them individually. It is also now possible to organize layers into layer sets, and to search for layers by name; and copying and pasting now creates a new layer by default, not a floating selection. There is also a new Welcome dialog (shown above), which provides quick access to documentation, recent files, and UI settings, including color themes, and icon and font scaling. In addition, the Search command now shows the menu location of the action you have searched for in the search results.

The release also introduces support for RGB color spaces beyond simple sRGB, making it possible to load and edit images with Adobe RGB color profiles without losing information. The change also “lays the groundwork” for support for the CMYK and LAB color spaces. Interoperability with Photoshop has been improved, expanding support for the PSD file format, and making it possible to load JPEGs and TIFFs with Photoshop-specific metadata like clipping paths, guides and layers. It is also now possible to import color palettes in Adobe’s ACB and ASE formats, as well as the open-source SwatchBooker palette. Other changes include support for new lossless image compression formats QOI and JPEG XL, and for DDS game texture files with BC7 compression. And while it isn’t currently possible to edit in CMYK mode, it is possible to import and export CMYK JPEG, JPEG XL, TIFF and PSD files, with GIMP converting from RGB color space.

GIMP 3.0 also introduces support for more languages for developing scripts and plugins: it is now possible to develop add-ons using JavaScript, Lua and Vala as well as C. The update also switches GIMP from Python-fu to standard Python 3. The changes break compatibility with add-ons written for GIMP 2.10, although some popular plugins like G’MIC are already available for GIMP 3.0.

GIMP 3.0 is compatible with Windows 10+, macOS 11.0+ and Linux, including Debian 12+. Source code is available under a GPLv3 license. The software is free, but if you want to support future development, or support its core developers, you can find details of how to do so on the Donate page on the GIMP website.

https://www.gimp.org/release-notes/gimp-3.0.html

Feather 1.1

First released in 2022, Feather lets artists create 3D concept designs on iPads. It uses an interesting workflow, in which you first draw 3D guide surfaces, then draw strokes on top of the guides, rotating the developing sketch in 3D using touch gestures. The 3D models can be exported to other DCC apps in glTF or OBJ format, while a free Blender add-on makes it easier to edit assets created in Feather in the open-source 3D software. Users can also export 2D images, videos and turntable animations directly from the app. In 2024, the software, which had been available free in early development, became a paid app, and the Chrome web app was temporarily discontinued.

Feather 1.1 improves workflow in the app when using an Apple Pencil stylus. Squeezing the Apple Pencil now brings up a customizable menu with commonly used tools and commands, including a new Find Group command to help navigate complex scenes. In addition, tools now respond dynamically when hovering the Pencil over them: for example, hovering over a 3D guide lets you preview brush size and color before beginning to draw. Users with the new Apple Pencil Pro also get haptic feedback when erasing lines, making selections, or sampling colors and brush properties. Sketchsoft pitches the tactile feedback as making sketching feel more immersive, increasing precision.

Feather 1.1 is compatible with iPadOS 16.0+. A perpetual license costs $14.99.

https://support.feather.art/docs/whatsnew

NVIDIA Blackwell RTX PRO GPUs 96GB VRAM

NVIDIA has unveiled the RTX PRO Blackwell series, its new range of professional workstation, laptop and data center GPUs based on its current Blackwell GPU architecture. The first two desktop GPUs, the 96GB RTX PRO 6000 Blackwell Workstation Edition and RTX PRO 6000 Blackwell Max-Q Workstation Edition, will be available through distribution partners next month. NVIDIA pitches the RTX PRO 6000 Blackwell Workstation Edition – the larger, more power-hungry version of the card – as “the most powerful desktop GPU ever created”. The other desktop cards – the RTX PRO 5000, RTX PRO 4500 and RTX PRO 4000 Blackwell GPUs – will be available later in the year, along with the laptop and data center GPUs.

NVIDIA describes the RTX PRO Blackwell series, which is being marketed at agentic AI, simulation, 3D design and VFX, as a “revolutionary generation” of GPUs with “breakthrough accelerated computing, AI inference, ray tracing and neural rendering technologies”. The new cards are the first professional GPUs based on NVIDIA’s Blackwell architecture, following the rollout of their consumer counterparts, the GeForce RTX 50 Series, earlier this year.

All feature the latest iterations of NVIDIA’s key hardware core types: CUDA cores for general GPU compute, Tensor cores for AI operations, and RT cores for hardware ray tracing. NVIDIA claims that their fifth-gen Tensor cores and fourth-gen RT cores offer “up to 3x” and “up to 2x” the performance of their counterparts in its previous GPU architecture, Ada Lovelace. Their Streaming Multiprocessors, into which the CUDA cores are grouped, have “up to 1.5x faster throughput” than their predecessors, and feature new Neural Shaders that integrate small neural networks into programmable shaders. The new GPUs use GDDR7 memory and PCIe 5.0 buses, increasing data transfer bandwidth over the Ada Generation cards, and support the current DisplayPort 2.1b display standard. The RTX PRO 6000 and 5000 cards also support NVIDIA’s Multi-Instance GPU (MIG) technology for partitioning a single GPU into smaller instances.

NVIDIA has only released full specs for the top-of-the-range RTX PRO 6000 Blackwell cards so far, but they are a clear step up from the previous-gen RTX 6000 Ada Generation. Hardware core counts and compute performance are significantly higher, while GPU memory doubles to 96GB, and memory bandwidth almost doubles. Power consumption is unchanged – at least for the RTX PRO 6000 Blackwell Max-Q Workstation Edition, which features the same standard cooling design as its predecessor. That figure doubles to 600W in the RTX PRO 6000 Blackwell Workstation Edition, which uses a double flow-through design, although the increase in compute performance is much smaller. The specs for the lower-end cards, which also use standard cooling designs, still contain a number of ‘TBCs’, although we would expect them to follow a similar pattern. What we don’t yet know is how the Blackwell cards will stack up in terms of price-performance, since NVIDIA hasn’t announced recommended pricing.

NVIDIA has also announced its RTX PRO Blackwell series laptop and data center GPUs. We don’t usually cover either on CG Channel, but you can see NVIDIA’s summary chart for the laptop GPUs above: as well as counterparts for the desktop RTX PRO 5000 and 4000 Blackwell, there are four lower-end cards, the RTX PRO 3000, 2000, 1000 and 500 Blackwell. Specs for the RTX PRO 6000 Blackwell Server Edition can be found on NVIDIA’s website.

The new RTX PRO Blackwell series cards mark a change of branding from the current RTX Ada Generation GPUs, with the new ‘PRO’ tag clearly targeting them at professional users. What constitutes a professional user is perhaps less clear, since NVIDIA now acknowledges in its marketing for its Blackwell consumer cards, the GeForce RTX 50 Series, that they are used by both “gamers and creators”, with many CG artists using them for rendering. Rather than video production or VFX, the use cases cited in NVIDIA’s release announcement for the RTX PRO Blackwell cards are computational AI, health care, and design visualization. As a result, it’s hard to assess how the new GPUs’ specs will translate into real-world performance in CG applications. Of the examples given, the closest to DCC work is Cyclops, architecture firm Foster + Partners’ GPU ray tracing product for view analysis, which is described as running at “5x the speed” on the new RTX PRO 6000 Blackwell Max-Q Workstation Edition as on the NVIDIA RTX A6000 – its counterpart from two GPU generations ago, which is now over four years old. Electric vehicle manufacturer Rivian is also quoted as saying that the RTX PRO 6000 Blackwell Workstation Edition delivers “the most stunning visuals we have ever experienced in VR” in Autodesk’s VRED visualization software, but no actual performance figures are given.

The RTX PRO 6000 Blackwell Workstation Edition and Max-Q Workstation Edition will be available via PNY and TD SYNNEX in April 2025, and via workstation manufacturers in May 2025. The RTX PRO 5000, 4500 and 4000 Blackwell desktop GPUs will be available “in the summer”. RTX PRO Blackwell laptop GPUs will be available via Dell, HP, Lenovo and Razer “later this year”. NVIDIA hasn’t announced recommended prices for the new GPUs yet.

https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/

Unity 6.1 and Roadmap

For artists, Unity 6.1 brings improvements in rendering performance, including a new Deferred+ rendering path in the Universal Render Pipeline (URP) for mobile and web games. It improves performance over the existing Deferred rendering path in complex environments, using “advanced cluster-based culling” to support more real-time lights. Both the URP and High Definition Render Pipeline (HDRP) get support for Variable Rate Shading, making it possible to set the shading rate for custom passes, improving performance without significantly affecting visuals. Variable Rate Shading is supported via Vulkan on Android and PC, via DirectX 12 on Xbox and PC, and on the PlayStation 5 Pro.

Developers of Windows and Xbox games also get improvements in DirectX 12 performance, with a new split graphics job threading mode submitting commands to the GPU faster. According to Unity, it leads to a reduction in CPU time of “up to 40%”. DirectX 12 ray tracing performance has also been improved via Solid Angle Culling, to avoid rendering very small or distant instances, improving CPU performance by “up to 60%”. There are also a number of more general optimizations, leading to a reduction in ray tracing memory usage of “up to 75%”.

The other new features primarily affect programmers as opposed to artists. They include a new Project Auditor for static analysis, which analyzes scripts, assets, and project settings to help identify performance bottlenecks in a project. Build automation is also now integrated into the Unity Editor.

There are also changes to platform support, particularly for Android games, including support for the larger 16KB page sizes introduced in Android 15. It is also now possible to match Vulkan graphics configurations to different Android devices, filtering by vendor, brand, product name, and OS, API and driver versions. Developers of extended reality experiences on Android get a number of changes, including integration with key Unity toolsets like AR Foundation and the XR Interaction Toolkit. Unity 6.1 also introduces support for Instant Games on Facebook and Messenger, and WebGPU support for mobile web games.

Unity 6.1 is currently in public beta. The stable release is due in April 2025. The Unity Editor is compatible with Windows 10+, macOS 11.0+ and Ubuntu 22.04/24.04 Linux. Free Personal subscriptions are now available for artists and small studios earning under $200,000/year, and include all of the core features. Pro subscriptions, for mid-sized studios, now cost $2,200/year. Enterprise subscriptions, for studios with revenue over $25 million/year, are priced on demand.

https://unity.com/releases/editor/beta

Features due later in the Unity 6.x series include a new Mesh LOD system. Unity describes it as providing “compact automated LOD generation in-editor”, making it possible to generate levels of detail for both static and skinned meshes directly inside the Unity Editor, rather than having to configure LOD levels in an external 3D modeling app.

Unity’s new animation system will also be available as an official preview later during the Unity 6.x release cycle. Changes include support for procedural rigging for any skeletal asset, not just characters, and for remapping animations across assets with differing sizes, proportions and hierarchies. The slide above also namechecks a new animation blending system, with support for per-bone masking and layer blending, and pose correction via a new underlying rig graph. The new animation system will also feature a reworked hierarchical, layered State Machine capable of scaling to “thousands” of characters. At 32:27 in the video, you can see a demo of a crowd animation with 1,000 characters running in-editor at 30fps, although Unity didn’t say what hardware configuration it was running on.

Changes to the physics system include a swappable physics backend, making it possible to switch between physics engines in project settings. The slide above mentions “initial support” for Havok Physics and PhysX, but Bullet Physics and MuJoCo were also namechecked in the presentation. The native Unity Physics system will get new solvers for “more complex and reliable” behaviors.

Interface designers get updates to Unity’s UI Toolkit. Key changes include the option to render the UI directly in world space, for more immersive XR experiences, and to apply post effects like blur or color shifts. Support for vector graphics will reduce asset file sizes, and enable assets to scale across device screen sizes without loss of visual quality. It will also be possible to modify the ubershader without recreating it, allowing for “detailed adjustments to text, graphics and textures through familiar Shader Graph workflows”.

Unity also announced updates to Unity Muse, its in-editor generative AI toolset. Changes include support for video-to-motion as well as text-to-motion for animation, making it possible to generate “nuanced animations” from smartphone reference footage. There will also be a new library of LoRAs for generating sprites, pre-trained for use cases like icons, props and platformer backgrounds. 3D mesh and texture generation and skybox generation are further off: presumably in the Unity 7.x release cycle.

Other changes covered in the video include new live collaboration features, making it possible to edit files locally and have revisions synced automatically to the cloud. Programmers get improvements to the Entity Component System, the Unity Profiler, and new live game development features, covered in detail in the final section of the video. However, several key upcoming features that Unity had previously announced were not mentioned during the GDC presentation, and will presumably not arrive in 2025. They include updates to the world-building tools, the new unified renderer and Shader Graph 2, all previewed at Unite 2024 last year. You can find more details in this story.

https://unity.com/blog/unity-engine-2025-roadmap
    1 point
  35. Yes - thank you! The Plain effector did the trick. Knew I was missing something simple!
    1 point
  36. No, I don't think there is anything we can do about that with a single setting, presumably because clone color is an attribute of the Cloner object, and so (correctly, IMO) considered important to reflect in the viewport. It has always been this way as far as I recall, at least back to R11. That is not to say it doesn't annoy me sometimes, and I do wonder if we shouldn't ask for options in the Basic / Display tab that allow this not to happen, or at least not to preview in the viewport. But of course then we get into the massively complex question of what should happen instead, which is a nightmare if you think about all the possible options it could offer, and I have renewed sympathy for why it is left as it is! The workaround(s) are simple enough though - we could just use a cylindrical field and a Plain effector in color mode to turn one clone the highlight colour, as sketched below... or any of the other ways the Cloner offers to do that... CBR
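For anyone who would rather script that workaround, here is a minimal Python sketch (not CBR's file, just an illustration of the setup he describes): it builds a Plain effector restricted to color, drives it with a Cylindrical Field, and assigns it to a Cloner. The parameter IDs are from the standard C4D Python SDK, but the color-mode enum value and the function itself are assumptions to verify against your version.

import c4d
from c4d.modules import mograph

def highlight_clones(doc, cloner, color=c4d.Vector(1, 0.85, 0)):
    """Sketch: recolor the clones inside a cylindrical region, no transform."""
    eff = c4d.BaseObject(c4d.Omgplain)                    # Plain effector
    eff[c4d.ID_MG_BASEEFFECTOR_POSITION_ACTIVE] = False   # color only, no movement
    eff[c4d.ID_MG_BASEEFFECTOR_COLOR_MODE] = 1            # assumed enum: 'Custom Color'
    eff[c4d.ID_MG_BASEEFFECTOR_COLOR] = color
    doc.InsertObject(eff)

    field = c4d.BaseObject(c4d.Fcylinder)                 # Cylindrical Field
    doc.InsertObject(field)
    layer = mograph.FieldLayer(c4d.FLfield)               # link the field into
    layer.SetLinkedObject(field)                          # the effector's field list
    flist = c4d.FieldList()
    flist.InsertLayer(layer)
    eff[c4d.FIELDS] = flist

    inex = c4d.InExcludeData()                            # note: this replaces any
    inex.InsertObject(eff, 1)                             # existing effector list
    cloner[c4d.ID_MG_MOTIONGENERATOR_EFFECTORLIST] = inex
    c4d.EventAdd()
    return eff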
    1 point
  37. I'm not sure what your intended scale is, but if it's a real-world-size wrecking ball, they apparently start at 500 kg and go all the way up to 5.5 metric tonnes. If that's the case, it would take a devastatingly turbulent force to create the slack in the chain seen in your original frame (a rough sanity check below). That chain should be pretty much dead straight if you want a realistic sim.
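To put a rough number on that intuition - a back-of-envelope sketch, assuming a 5,000 kg ball treated as a simple pendulum - the steady sideways force needed to hold the chain at an angle theta off vertical is F = m·g·tan(theta):

import math

# Back-of-envelope check: treating the ball as a simple pendulum, the
# steady sideways force needed to hold the chain at an angle theta from
# vertical is F = m * g * tan(theta).
m = 5000.0                    # kg - assumed mid-range wrecking ball mass
g = 9.81                      # m/s^2
for deg in (5, 10, 20):
    F = m * g * math.tan(math.radians(deg))
    print(f"{deg:2d} deg off vertical -> {F / 1000:.1f} kN of sideways force")

# ~4.3 kN at 5 deg, ~8.6 kN at 10 deg, ~17.9 kN at 20 deg - and putting
# actual slack in the chain needs even more. Hence the dead-straight chain.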
    1 point
  38. @bezo Thank you bezo! It worked! Best, Eudes.
    1 point
  39. You could simply draw the spline you need (as a path) in the top view, drag it up a bit above the surface, and then, again in the top view, project it onto the surface (spline/move/project from the main menu). Since your spline will only have a few points and its curves are driven by tangents, you can shape the path however you want, and by adjusting the interpolation points you can create smooth curves. If you want to animate a growing path, you could use a standard Sweep or a MoSpline; a sketch of the Sweep route follows below. If you use a Sweep, note that you need to offset the spline according to the "path thickness". geodesic_0001.c4d
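A minimal Python sketch of that Sweep route, assuming the projected path spline is already in the scene (the helper itself is illustrative; parameter IDs are from the standard C4D Python SDK):

import c4d

def build_growing_sweep(doc, path, radius=2.0):
    """Wrap a projected path spline in a Sweep whose growth can be keyframed."""
    sweep = c4d.BaseObject(c4d.Osweep)
    sweep[c4d.SWEEPOBJECT_GROWTH] = 0.0        # keyframe 0 -> 1 to grow the path
    doc.InsertObject(sweep)

    path.Remove()                              # re-parent the path under the Sweep
    path.InsertUnder(sweep)

    profile = c4d.BaseObject(c4d.Osplinecircle)   # circular cross-section
    profile[c4d.PRIM_CIRCLE_RADIUS] = radius      # half the "path thickness"
    profile.InsertUnder(sweep)                    # ends up first child (contour)

    c4d.EventAdd()
    return sweep

# Usage: run from the Script Manager with the path spline selected.
doc = c4d.documents.GetActiveDocument()
spline = doc.GetActiveObject()
if spline is not None:
    build_growing_sweep(doc, spline)

Keyframing SWEEPOBJECT_GROWTH from 0 to 1 then gives the growing-path animation.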
    1 point
  40. His latest update was 4 days ago, about ZBrush. I think the next release will be on the 20th of April... almost a month from now.
    1 point
  41. Several options here, but you were on the right track...

1. Displacer should be fine for this, but you probably want to use a circular gradient as the driving texture... in which you control the characteristics of the curve by affecting the knots in the gradient, and that white dot between them... Also check the various modes of that gradient - Linear, for example, might be more suitable for this than an exponential mode.

2. Bezier Primitive. Control the peak on a high-res patch with a much lower-res grid. You can think of this as a plane with an FFD pre-applied!

3. FFD on a plane - the 'manual' version of the above.

4. Soft Selection. If you would prefer to edit a high-res plane directly, then soft selection would be the go-to here, allowing you to specify various profiles for diffusing component selections so that they affect adjacent (unselected) components too. Dome mode shown below...

5. Saving possibly the best until last, I would personally use Fields for this, implemented via a Plain effector which is a child of the plane and has its deformation mode set to Points. This, I suspect, is going to be most useful, because if you use the contour controls in the remapping tab of a Spherical Field, it lets you control the character of the curve directly via a spline... like this... which would give us this sort of result... (a scripted version of this setup is sketched below)

So that's most of the non-mathematical approaches, with the actual character / interpolation of the ramp defined in a number of different ways... The maths-based stuff I will leave to someone more into that than I am!

As for the Atom Array aspect of your question - that will work as a parent of any of the setups above, but if you'd rather not wrangle the unnecessary vertex spheres in that, so will cloning cylinders to edges (scale edge 100%) or stealing the topology using Edge to Spline (still available parametrically via a Correction deformer) and sweeping the result. CBR
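Since option 5 is the one recommended above, here is a minimal Python sketch of that Fields setup, assuming `plane` is an existing plane object; the parameter IDs come from the standard C4D Python SDK, and the 100-unit lift is just an illustrative value:

import c4d
from c4d.modules import mograph

def add_bump_deformer(doc, plane):
    """Sketch of option 5: a Plain effector deforming a plane's points,
    driven by a Spherical Field. A starting point, not production code."""
    eff = c4d.BaseObject(c4d.Omgplain)                         # Plain effector
    eff[c4d.ID_MG_BASEEFFECTOR_DEFORMATION] = c4d.ID_MG_BASEEFFECTOR_DEFORMATION_POINT
    eff[c4d.ID_MG_BASEEFFECTOR_POSITION_ACTIVE] = True
    eff[c4d.ID_MG_BASEEFFECTOR_POSITION] = c4d.Vector(0, 100, 0)  # lift points 100 units

    field = c4d.BaseObject(c4d.Fspherical)                     # Spherical Field
    doc.InsertObject(field)

    layer = mograph.FieldLayer(c4d.FLfield)                    # hook the field into
    layer.SetLinkedObject(field)                               # the effector's list
    flist = c4d.FieldList()
    flist.InsertLayer(layer)
    eff[c4d.FIELDS] = flist

    eff.InsertUnder(plane)            # child of the plane = acts as a deformer
    c4d.EventAdd()
    return eff

The contour spline in the field's Remapping tab is easiest to shape by hand afterwards, which is where the control over the character of the curve comes from.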
    1 point
  42. There's the legal situation and then there's the real-world situation. Be sensible about it. I'm not putting my work up on a widely publicised website where perhaps the wrong person will see it and get annoyed. Instead I'm simply putting together a portfolio and then adding the link to my resume when applying for work, so people can see what I can do. Personally I take the view that if the artwork is out in public and I am the one who produced it, then I will show it to whoever I need to in order to get my next job. If you ask for permission, everyone will say no. So don't ask for permission.
    1 point
  43. It seems to me we are doing mostly fine until the moment it impacts the wall, and then the rebound goes all weird - the wrecking ball floats and bobs up and to the right in a way that definitely doesn't look like natural rebound, and agreed, seems to defy gravity thereafter. It may be of some comfort to note that I don't find anything overtly 'wrong' with your settings so far, and I remain as mystified as you must be as to why it is behaving like that, and why it requires such high sub-step and iteration levels to even achieve some level of stability without falling apart. I will investigate this further as time permits, which I appreciate doesn't help you in the meantime, but I am interested to know the thoughts and findings of anyone who is a lot more familiar than I am with the foibles and quirks of the new integrated dynamics and may be able to get to the answer a lot quicker than me. CBR
    1 point
  44. New Tutorial: And a demo project file: stylized-wooden-ladder_vD01.c4d
    1 point
  45. CORE4D has a great beginner series: My Scene Nodes 101 series isn't exhaustive but has some simpler examples: For more advanced training, Dominik Ruckli has some great videos: Dominik Ruckli - YouTube Additionally - once you've got a handle on Scene Nodes data structures and beginning/intermediate setups, you can generally follow along with Houdini / Geo Nodes tutorials. For those, it is hard to do better than Entagma: Entagma - YouTube
    1 point
  46. It would be great to see inside how it was created.
    1 point