Leaderboard
Popular Content
Showing content with the highest reputation since 03/24/2025 in all areas
-
There are some very interesting new features. Particles have an extensive list of new functionalities, from which the following stand out. The field-driven density distribution of particles is highly appreciated, along with in-built noise shaders for both distribution and emission. Density-control shaders and Fields are something I've been asking for in MoGraph for years; finally it can be achieved through particles.

Noise Sampling for Color and Data Mapper
https://help.maxon.net/c4d/en-us/Content/Resources/Images/2025-2_Particles_NoiseSampling_01.mp4
https://help.maxon.net/c4d/en-us/Content/Resources/Images/2025-2_Particles_NoiseSampling_02.mp4

Neighbor Search algorithm for Flock, Predator Prey and Blending, similar to the geometry density coloring effect I've been asking for.

Some Scene Node capsules are now directly available alongside the other primitives in the drop-down menus. This was something I was actively arguing for from the early versions of Scene Nodes, as it was closely a UX issue. Things were pointing toward parallel ecosystems of the same tools being developed inside the same application, or tools that were not easily accessible, which would inevitably lead to confusion or frustration among new and old users alike. It wasn't too long ago that you could finally make capsules accessible from custom palettes, but that required users to know how to do it, and since Scene Nodes was so actively characterized as experimental or a system for advanced users, people preferred to stay away from it, missing some key features that were not that hard to use after all. Personally, I extend my custom layout with OM-supported capsules every new version.

Now the Line Spline is available, along with the Break Spline, Branch Spline (an easier MoSpline Turtle), Catenary Spline, Dash Spline, Electric Spline, Partition and Pulse Spline. What I don't know is whether they re-developed those tools in C++ or whether they are just links to the node implementation with modernized icons to fit the rest of the UI.

Well, there are still some arguments about what should be characterized as a Generator, what as a Modifier and what as a Deformer... For example, the Branch Spline is better suited to the Generators club, as it creates additional splines on top of the original, but I guess they ran out of ideas on how to represent it with a new icon without confusing it with the MoSpline... Which leads to other old arrangement arguments, like why new features aren't simply added to older tools as modes... For example, the Branch Spline as a mode of the MoSpline... Break, Dash, Partition and Pulse as modes of a single spline deformer/modifier, or Wrap, Shrink Wrap and Spherify as modes of a single deformer... Looking forward to seeing all distribution nodes as modes in the Cloner, and Blue Noise as a mode of the Push Apart Effector.

New mode for the Look at Camera expression. I always thought this was an old remnant from earlier C4D versions... I never used it because I achieved the exact same effect using the Target tag... I still don't know what the difference between them is...

Some unfortunate translation issues: weird title, not available in English (fixed the next day)
https://help.maxon.net/c4d/en-us/Default.htm#html/OFPBLEND-FP_BLEND_OBJECT_OUTPUT.html

MAXON ONE capsules are not documented... Bad marketing, as other users don't know what they are missing.

Things that we saw in the teaser video but are not documented (yet): Constellation Generator (Plexus effect), Liquid Simulator (yeah... the thing that should have been teased the most was not).
4 points
-
Maxon Unveils Game-Changing Cinema 4D 2025 Update: Enhanced Modeling, Texturing, and Scene Nodes

March 31, 2025 – Los Angeles, CA – Maxon has officially announced the highly anticipated Cinema 4D 2025.1.4 update, promising groundbreaking new features that will redefine the 3D animation and motion graphics industry. This latest iteration of Cinema 4D introduces significant improvements to modeling, texturing, Scene Nodes, and animation workflows, ensuring a more seamless experience for artists.

Official Statement from Maxon's CEO
“Cinema 4D 2025.1.4 is not just an update—it’s a revolution. We’ve listened to our users and implemented features that streamline workflows, accelerate creativity, and push the boundaries of what’s possible in 3D design. Our latest enhancements to modeling, texturing, and Scene Nodes will empower artists like never before.”
— David McGavran, CEO of Maxon

Key Features of Cinema 4D 2025.1.4:
🔹 Enhanced Parametric Modeling Tools – New and improved parametric objects offer greater control, including advanced spline editing and interactive beveling.
🔹 Advanced Scene Nodes 2.0 – A major update to Scene Nodes introduces a more intuitive interface, expanded procedural modeling options, and real-time performance boosts.
🔹 Improved UV Packing and Unwrapping – A completely reworked UV workflow includes automated packing, distortion minimization, and island grouping for better texturing efficiency.
🔹 Neural Render Engine (NRE) – Cinema 4D now features an AI-powered render engine that reduces render times by up to 90% while maintaining photorealistic quality.
🔹 Real-Time Path Tracing – A fully integrated real-time path tracer allows users to see near-final renders directly in the viewport.
🔹 Auto-Rigging AI – Character animation just got easier with an intelligent rigging system that auto-detects joints and optimizes weights instantly.
🔹 HoloC4D VR Integration – For the first time, users can sculpt, animate, and render in a fully immersive VR environment.
🔹 Redshift Cloud Render – A new cloud-based rendering system allows users to offload heavy render jobs and access high-performance GPU farms directly from Cinema 4D.
🔹 Deepfake MoGraph Generator – A new AI-assisted tool that generates realistic face animations from a single image input.
🔹 AI-Assisted Material Suggestion – The new AI-driven material engine suggests realistic textures and shader settings based on your scene context.

Spline SDF – Smooth blending of 2D splines.
Outline Capsule – Unlimited number of outlines with the new Outline node and capsule.
Chamferer – The new Chamferer generator brings non-destructive, parametric editing to individual spline control points.
Field Distribution Cloner Mode – Fields can now be used to place children instances by mapping the hierarchy of the children to the intensity of the field.

Release Date and Availability
Cinema 4D 2025.1.4 will roll out as a free update for all Maxon One subscribers starting April 1, 2025. Perpetual license holders will have the option to upgrade at a discounted rate. For more details, visit www.maxon.net.

Exclusive Interview with Maxon's CEO, David McGavran

Q: What was the main focus for Cinema 4D 2025.1.4?
David McGavran: “Our primary goal was to refine and enhance the tools that artists use daily. We wanted to create a more intuitive and efficient experience, whether it's through procedural modeling, advanced texturing, or improved rendering. The new Scene Nodes 2.0 is a huge leap forward, allowing artists to build complex structures with ease.”

Q: How does this update compare to previous ones?
David McGavran: “While every update brings innovations, this one focuses on practical enhancements that improve workflow speed and creative flexibility. We've also made significant improvements to UV workflows, Boolean modeling, and dynamic simulations based on user feedback.”

Q: Can you give us a sneak peek into the future of Cinema 4D?
David McGavran: “Absolutely. We are already developing features for Cinema 4D 2026 that will push procedural modeling even further. Expect deeper Redshift integration, better scene organization tools, and an overhaul of particle simulations. We're also exploring more real-time collaboration features to make remote workflows even smoother.”
4 points
-
Well... I've asked (and am still asking) for a Helix 2.0 with multiple modes like Logarithmic, Archimedean and Double, but primitives won't ever change for fear of breaking compatibility with older projects. You could, however, try it just on a Symmetry Object...
3 points
-
I think you may have this backwards. In plain language:

Straight alpha: stores the full RGB colour for each pixel, but ignores how transparent it may or may not be, i.e. the transparency of a pixel has no impact on the colour stored. This means, for example, that if you render a white cloud with a soft wispy edge in a blue sky, the rendered cloud will only contain white cloud colours; the blue of the sky will not be present in the rendered cloud, even where the alpha transparency eats into the cloud.

Premultiplied: this simply means the image being rendered has already had the background baked into the rendered colour data. In the cloud example it means the cloud will start to turn blue as its edge becomes more transparent.

In practical terms, straight alphas can be great because there's no bleeding of the background into the visual RGB data; you can take your white cloud and throw it onto any background you like, and there won't be any blue from the sky creeping in. On the other hand, if you place your premultiplied cloud onto an orange sunset background, you'll get a blue halo around the cloud, which sucks.

However... it isn't all roses. Sometimes you need the background colour to be baked into the transparent edge, because some things are just flat out impossible to render due to the number of layers present or the motion on screen. Here's one which screws me over regularly: what happens if I have a 100% transparent plastic fan blade, but the fan blade is frosted, and in the middle of the fan is a bright red light? Visually the fan blade looks like a swooshing Darth Vader lightsabre. It's bright red and burning white from the brightness, but what's actually there? A 100% transparent object... With a straight alpha, the alpha channel will obliterate my rendering; it's 100% transparent plastic. You can see it, but the alpha channel decides to ruin your day and now the rendering is useless. The only option here is a premultiplied alpha where the background is baked into the motion blur and SSS of the plastic fan blade. Sure, I need to make sure my 3D background somewhat matches my intended compositing background, but it's the only way to get any sort of useful render. The same goes for motion blur, DOF blur, and multiple transparent layers in front of each other (steam behind glass).

The honest answer is: use whichever one is least likely to screw you over. If you have lots of annoying transparent/blurry things to deal with, go premultiplied, but plan your background ahead of time. If you want clean alphas on a simpler object render, go straight alpha.

I haven't read your linked blog all the way through, but I will say... there is an abundance of wrong people loudly proclaiming themselves to be fonts of all knowledge. There's one in the Octane community who insists on inserting himself into literally every thread on the entire Octane forum to tell you you're an idiot for using a PNG file; he has hundreds of blog pages which are a strange mix of 3D rendering and flat-earth magical woo woo maths to show everyone just how right he is. That said, your rainbow example does match up with what the blog says. The only difference is that the blog seems to think straight alpha is evil and you should only use premultiplied, whilst I would say both have their uses, with straight being preferable when possible.
2 points
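To put some numbers on the cloud example, here's a rough numpy sketch (the pixel values, both sky colours and the 50% soft-edge alpha are invented for illustration); it shows how a premultiplied edge drags the original blue sky into an orange-sunset comp, while a straight-alpha edge stays clean:

```python
import numpy as np

# Made-up colours for one soft-edge pixel of the cloud (values in 0..1).
cloud_rgb  = np.array([1.0, 1.0, 1.0])   # white cloud
blue_sky   = np.array([0.2, 0.4, 1.0])   # background present at render time
orange_sky = np.array([1.0, 0.5, 0.1])   # background used later in the comp
alpha      = 0.5                          # wispy edge: half transparent

# Straight (unassociated): RGB stays pure cloud colour, alpha kept separately.
straight_rgb = cloud_rgb

# Premultiplied (associated): the blue sky is already baked into the edge.
premult_rgb = cloud_rgb * alpha + blue_sky * (1.0 - alpha)

def over_straight(fg_rgb, fg_a, bg_rgb):
    # Classic "over" for straight alpha: multiply the foreground by alpha first.
    return fg_rgb * fg_a + bg_rgb * (1.0 - fg_a)

def over_premult(fg_rgb, fg_a, bg_rgb):
    # "Over" for premultiplied alpha: the foreground is added as-is.
    return fg_rgb + bg_rgb * (1.0 - fg_a)

print(over_straight(straight_rgb, alpha, orange_sky))  # clean white/orange mix
print(over_premult(premult_rgb, alpha, orange_sky))    # the baked-in blue creeps in
```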
-
Hey Jeff - Maxon is recording the NAB presentations and plans to post them 1-2 weeks after the show.2 points
-
There are, if we look carefully, a number of things subtly wrong with the settings in this scene. As HP predicts above, Render Perfect on the sphere is one of them; turning that off fixes the collision deformer's effect not showing in the render. Next, we had some suspect settings in the sweep object (Parallel Movement should be off, Banking should be ON, and End Rotation should be 0 degrees), which fixes the twist in your path spline over the indent and restores it to correct, contiguous circular form all the way along its length...

The mode of the Collision Deformer should be Outside (Volume), and the capsule should ideally not be a child of it, though it doesn't particularly matter in this scene. Likewise, object order in the OM could be better, bearing in mind that Cinema scans downwards through it from the top. Below is more ideal I would say, and general good practice.

Next, the curvature in your path spline within the sweep isn't quite matching the curve of the indent in the sphere, resulting in some unevenness in that area. This can be fixed by getting a new path spline for the sweep, which we get by doing Current State to Object on the main sphere (once collided) and then Edge to Spline on the centre edge loop of that. But first we have to fix the collision object, which should also ideally be a sphere so that we can choose Hexa type and thereby avoid the complex pole on the end of the capsule it replaces, which was previously confusing the collision deformer and producing some wonky / uneven collision vertices in the main sphere at the apex of the collision point, which would obviously affect any spline you subsequently derived from it.

In my version I wanted a better, higher-resolution sphere, but not one that caused the collision deformer any extra work, so I changed the type of that sphere to Hexa as well, which then necessitated the addition of a Spherify deformer before the Collision Deformer (because hexaspheres are not mathematically spherical out of the gate). And then lastly that went under SDS to give us improved resolution, and my Current State to Object was performed on the SDS instead of the sphere itself, for maximum spline matching.

Anyway, I have fixed all that in the scene attached below...

groove-on-sphere CBR Fix.c4d

CBR
2 points
-
Fun facts: The text was AI generated. I did not expect a reference to the CEO by name. Later I asked for an interview. It wasn't elaborate prompting at all; I asked three times all in all. I was clear about my intentions, so it knew it was supposed to be an April Fools article.

I asked for C4D screenshots for the next version... it kept drawing Blender elements on them. Curiously, there's a repeating pattern that resembles a mix of the old Blue Pearl logo with the R21 splash screen (see the last two images). I removed the AI image from the article the next day just because it was too cursed... I later thought I could make the YT image link to a Rick Roll video, but it was too late. The Rocket Lasso image is a manipulated version of the S26 one. I asked for a new C4D logo but it refused; there must be a limit of 3 generated images per day, I guess.

The Chamferer icon is the CV-Chamfer icon. I designed the Spline SDF icon by templating the Spline Mask icon. I planned this prank 4 days in advance. Spline SDF, Chamferer, Outliner and Field Distribution are all real, but not as versatile to construct as they are presented. I did not expect the Spline SDF to be that fast in the viewport. The Field Distribution was the most time consuming, as I had to construct an XPresso rig to manipulate 4 concentric Torus Fields from the radius of the visible Spherical Field... a real pain in the a$$, and I didn't manage to link the Inner Offset to the radius of the innermost Torus to make it more believable. Fortunately I didn't have to spend extra time recreating the UI of the Cloner, as I had already done that a week earlier as a future suggestion for MAXON.
2 points
-
Dear members,

We are happy to announce a brand new YouTube channel with a focus on the C4D node system. In our research we realized that training content for nodes is very sparse and lacking in quality and depth. We already have two lessons available, which will be part of an ongoing series.

A humble request from our side: even if you are not into nodes, please subscribe so that the channel grows, which will enable us to monetize it down the road. On a longer timeline this could let us reduce the subscription period or even remove it altogether and open up the forum to many more artists.

https://www.youtube.com/@CORE4D

Thank you and enjoy the content!

P.S. Any member willing to contribute to the channel is more than welcome - drop us a message : )
2 points
-
@b_ewers
1. Change the Maxon Noise > Input > Source to UV/Vertex Attribute, so that the noise samples in 2D Texture Coordinate (UV) space rather than 3D space.
2. Significantly reduce the overall scale from 100 to something like 1.2.
3. Adjust the relative scale to something like 1, 20, 1 to get the vertical streaking.
4. Increase contrast to 1.
5. Change the noise type to Turbulence.

uv-space-noise_v01.c4d
1 point
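If it helps to see why step 3 gives streaks, here's a small pure-numpy sketch (not the Maxon Noise itself, just a stand-in lattice turbulence with invented helper names): dividing the V coordinate by the larger relative-scale factor makes the pattern vary slowly along V, which reads as vertical streaking:

```python
import numpy as np

def lattice_hash(ix, iy):
    # Cheap pseudo-random value per lattice corner, in [0, 1).
    return (np.sin(ix * 127.1 + iy * 311.7) * 43758.5453) % 1.0

def value_noise(u, v):
    # Smoothly interpolated lattice noise, a stand-in for a real noise shader.
    iu, iv = np.floor(u), np.floor(v)
    fu, fv = u - iu, v - iv
    fu, fv = fu * fu * (3 - 2 * fu), fv * fv * (3 - 2 * fv)  # smoothstep
    n00, n10 = lattice_hash(iu, iv), lattice_hash(iu + 1, iv)
    n01, n11 = lattice_hash(iu, iv + 1), lattice_hash(iu + 1, iv + 1)
    return (n00 * (1 - fu) + n10 * fu) * (1 - fv) + (n01 * (1 - fu) + n11 * fu) * fv

def turbulence(u, v, octaves=4):
    # Sum of absolute-value octaves, roughly what a Turbulence-type noise does.
    out, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        out = out + amp * np.abs(value_noise(u * freq, v * freq) * 2.0 - 1.0)
        amp, freq = amp * 0.5, freq * 2.0
    return out

# Sample in 2D UV space: overall scale ~1.2, relative scale (1, 20).
overall, rel_u, rel_v = 1.2, 1.0, 20.0
u, v = np.meshgrid(np.linspace(0, 8, 256), np.linspace(0, 8, 256))
streaks = turbulence(u / (overall * rel_u), v / (overall * rel_v))
# 'streaks' now varies quickly across U and slowly along V -> vertical streaking.
```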
-
I was absolutely perplexed by something that seemed so simple at first. Later on, I acquired some mental trauma from tracking down a particularly nasty bug around alpha and PNGs. You'd probably not be surprised that, once one becomes more or less familiar with the nuances of alpha channel handling, nuanced bugs can crop up in even the most robust software. So... yes. 🤣 This is where I encourage folks to gain enough confidence in their own, hopefully well researched, understanding. Eventually, it enables folks to identify where the specific problem is emerging.

The skull demonstration has zero-alpha regions within the "flame" portions. Following the One True Alpha formula, the operation should be adding the emission component to the unoccluded plate code values. There is no "scaling by proportion of occlusion" of the plate that is "under", as indicated by the alpha being zero. The author has some expanded commentary over on the issue tracker. The following diagram loosely shows how the incoming energy, the sum of the green and pink arrows, yields a direct "additive" component in the green arrow, and the remaining energy indicated by the pink arrow, scaled by whatever is "removed" in that additive component, is then passed down the stack.

If this seems peculiar, the next time you are looking out of a window, look at the reflections "on" the window. They are not occluding in any way! Similar things occur with many, many other phenomena, of course. For example, in the case of burning material from a candle, the particulate is effectively so close to zero occlusion as to be zero. Not quite identical to the reflection or gloss examples, but suitable enough for a reasonable demonstration.

Sadly, Adobe has been a complete and utter failure on the subject of alpha for many, many years. If you crawl over the OpenImageIO mailing list and repository, you will find all sorts of references as to how Adobe is mishandling alpha. Adobe's alpha handling is likely a byproduct of tech debt at this point, culminating with a who's who of image folks over in the infamous Adobe Thread. Zap makes a reference to the debacle in yet-another-thread-about-Adobe-and-Alpha here. Gritz, in the prior post, makes reference to this problem: You can probably read between the lines of Alan's video at this point. So this is two problems, one of which I've been specifically chasing for three decades, and one that relates to alpha.

As for the alpha problem, I cannot speak directly to After Effects as I am unfamiliar with it, but the folks I have spoken with said that the last time they attempted it, Adobe still did not properly handle alpha. I'd suggest testing it in some other compositing software, such as Nuke non-commercial, Fusion, or even Blender, just to verify the veracity of the claims folks like myself make. All three of those should work as expected. My trust that Adobe will do anything "correct" at this point is close to zero.

As for "colour management"... that is another rabbit hole well worth investigating, although it's probably easier to find a leprechaun than to pin down what "colour management" in relation to picture authorship means to some people or organizations. Keeping a well researched and reasoned skepticism in mind in all of these pursuits is key. 🤣
1 point
-
Faking it can be done with the tutorial above, using the multitude of tools you have at your disposal. I think it would be quite interesting to have a physically correct rig for this 🙂
1 point
-
I haven't done this myself so far, but I suspect the best way to go here might involve cycloid splines, blend mode cloners and Field Forces. Searching around those terms I found this tutorial from Insydium. Obviously, they are using their own particle system, but there is no reason to think the native one couldn't also do it... though I am probably not the best person to advise on the specifics of that; I don't have much experience in Cinema particles yet ! CBR1 point
-
Apologies... I can only post one post per day. Probably better that way... 🤣

The issue, as the linked video in the previous post gets into, is sadly nuanced. The correct math within the tristimulus system of RGB emissions is remarkably simple; it's just that software and formats are less than optimal. The dependencies of the "simple compositing operation" are:

1. Software employed. Many pieces of software are hugely problematic. See the infamous Adobe thread as a good example.
2. File formats employed. Some file encodings cannot support the proper One True Alpha, to borrow Mr. Gritz's turn of phrase, such as PNG.
3. Data state within the software. Even if we are applying the proper mathematical operation to the data, if the data state is incorrect, incorrect results will emerge.

The short answer is that if we have a generic EXR, the RGB emission samples are likely normatively encoded as linear with respect to normalized wattages, and encoded as gained with respect to geometric occlusion. That is, in most cases, the EXR data state is already ready for compositing. If your example had a reflection off of glass, a satin or glossy effect, a flare or glare or glow, a volumetric air material or emissive gas, etc., you'd be ready to simply composite using the One True Alpha formula, for the RGB resultant emissions only^1:

A.RGB_Emission + ((100% - A.Alpha_Occlusion) * B.RGB_Emission)

Your cube glow is no different to any of the other conventional energy transport phenomena outlined above, so it would "Just Work". If, however, the software or the encoding is broken in some way, then all bets are off. That's where the video mentions that the only way to work through these problems is by way of understanding.

Remember that the geometry is implicitly defined in the sample. In terms of a "plate", the plate that A is being composited "over" simply lists the RGB emission, which may be code value zero. As such, according to the above formula, your red cube's RGB emission sample of gloss or glow or volumetric would simply be added to the "under" plate. The key takeaway is that all RGB emissions always carry an implicit spatial and geometric set of assumptions. This should never happen in well behaved encodings and software. If it does, there's an error in the chain! JKierbel created a nice little test EXR to see if your software is behaving poorly.

Hope this helps to try and clear up a bit of the problem surface. See you in another 24 hours if required... 🤣

--

1. The example comes with a caveat that the "geometry" of the sample is "uncorrelated". For "correlated" geometry, like a puzzle piece perfectly aligned with another puzzle piece, such as a holdout matte, the formula shifts slightly. The formula employed is a variation of the generic "probability" formula, as Jeremy Selan explains in this linked post. If we expand the formula, we end up with the exact alpha-over additive formula above. It should be noted that the multiplicative component is actually a scaling of the stimuli, based on energy per unit area. A more "full fledged" version of the energy transport math was offered up by Yule and (often misspelled) Nielsen, which accounts for the nature of the energy transport in relation to the multiplicative mechanisms of absorption attenuation, as well as the more generic additive component of the energy.
1 point
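For anyone who wants to poke at that formula, here is a minimal numpy sketch of it (the sample values are invented): the occluding case scales the under plate, while the zero-occlusion glow case collapses to a pure addition:

```python
import numpy as np

def alpha_over(a_rgb, a_alpha, b_rgb):
    # The "over" operation as stated above: A's emission plus B scaled by
    # whatever A leaves unoccluded.
    return a_rgb + (1.0 - a_alpha) * b_rgb

plate = np.array([0.10, 0.30, 0.80])   # "under" plate emission

# An occluding sample: the alpha scales down the plate underneath.
solid = alpha_over(np.array([0.50, 0.05, 0.05]), 0.75, plate)

# A glow / reflection / volumetric sample with zero occlusion: pure addition.
glow = alpha_over(np.array([0.40, 0.02, 0.02]), 0.0, plate)

print(solid)  # [0.525 0.125 0.25 ]
print(glow)   # [0.5   0.32  0.82 ]
```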
-
Hello all. Flattered to be mentioned here. I just wanted to point out that the statement is not quite correct; the result will indeed include an emission component in the RGB after the composite. With associated alpha (aka “premultiplied”) the emission is directly added to the result of the occlusion. This happens to be the only way alpha is generated in PBR-like rendering systems, and is more or less the closest normative case of a reasonable model of transparency and emission. It also happens to be the sole way to encode any additive component like a gloss, a flare, a glare, fires or emissive gases, glows, etc. I’ve spent quite a few years now trying to accumulate some quotations and such on the subject, from luminaries in the field. Hope this helps.1 point
-
What I understand from all this is that the premultiplied method bakes the transparency into the colour values themselves: each pixel's RGB has already been multiplied by its alpha before the image is previewed/opened, hence the name. The straight one is more like the raw data: the colour and the alpha are kept as two independent channels, and the multiplication needed to view or composite the image correctly happens after the fact. With straight alpha you also don't have to recover the original colours from the alpha if you need them, as they are already available.
1 point
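As a small sketch of that relationship (invented values, assuming the usual convention where an alpha channel is stored alongside the RGB in both cases): straight to premultiplied is a multiply by alpha, and going back is a divide wherever alpha is non-zero:

```python
import numpy as np

straight_rgb = np.array([1.0, 0.2, 0.0])   # pure colour, independent of coverage
alpha        = 0.25                         # coverage, stored alongside the RGB

# Straight -> premultiplied (associated): bake the coverage into the colour.
premult_rgb = straight_rgb * alpha          # [0.25, 0.05, 0.0]

# Premultiplied -> straight ("un-premultiply"): only defined where alpha > 0.
recovered = premult_rgb / alpha if alpha > 0 else premult_rgb

print(premult_rgb, recovered)
```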
-
Asked ChatGPT, gave this answer:
R: 255, G: 0, B: 0, A: 0
R: 0, G: 0, B: 0, A: 0
1 point
-
The Classic - Carpet roll! 120_Carpet_Roll(MG+XP).c4d1 point
-
If I were in this situation, I would disconnect my machine from the internet (kill wifi, pull ethernet), because then there's a good chance it will simply time out after 30 seconds.
1 point
-
I haven't watched the series yet myself, but that title sequence is wonderful. And the music is as harmonically interesting as the visuals and techniques are to us CG guys. Perfect alternating consonance and dissonance. For those of you who have an interest in such things, here's Charles Cornell to explain why that's also great ! CBR1 point
-
I started in on the OpenPBR material with the new Redshift. I was surprised to see it in there till I saw that Autodesk also included it in Arnold this last release as well. The future is now... OpenPBR. Saul Espinosa says it's the only shader he uses now (RS + Houdini). Subsequently, I just sent a ticket to maxon to have it included in the default material dropdown.1 point
-
This looks to me like it should be as simple a thing as just modelling a groove into the sphere directly, unless there is some special (or spatial !) reason you need that element to be procedural? If that is the case, then a simple boolean subtraction should do it - a sphere and a thin torus under the new boolean object, in that order, will get you this sort of result...

That'll be a very quick and easy way to go, but won't give you flawless topology, if that is important at all... The manual way needs a small amount of prep for ideal topology, in that I would make the base sphere out of a mere 8-segment (hexa) sphere under SDS, and then in a group with a Spherify deformer to restore perfect roundness and provide a very solid topological base to work from. That will work more nicely with a collision deformer from any angle, because there are no complex poles at the actual poles of the sphere like there would be with a Standard sphere primitive...

...and once we have that, manually modelling in a groove is a fairly trivial operation, mainly involving bevelling the highlighted edge above and extruding the resulting poly ring in a bit / shaping it as desired by scaling its component edge loops, like so...

Let me know if you don't understand stuff, or if I am missing something important...

CBR
1 point
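If you did want the boolean route set up from a script rather than by hand, a rough sketch like this (Script Manager Python, using the classic Boole object rather than the new boolean, with guessed radii; double-check the primitive parameter symbols against your version) would build the sphere-minus-thin-torus stack:

```python
import c4d

def main():
    # Parametric sphere to be grooved (sizes are guesses; adjust to your scene).
    sphere = c4d.BaseObject(c4d.Osphere)
    sphere[c4d.PRIM_SPHERE_RAD] = 100.0

    # Thin torus used as the cutter, sitting on the sphere's equator by default.
    torus = c4d.BaseObject(c4d.Otorus)
    torus[c4d.PRIM_TORUS_OUTERRAD] = 100.0   # ring radius ~ sphere radius
    torus[c4d.PRIM_TORUS_INNERRAD] = 3.0     # pipe radius = groove size

    # Classic Boole generator; its default mode (A subtract B) is what we want.
    # If your default differs, set the mode in the Attribute Manager.
    boole = c4d.BaseObject(c4d.Oboole)
    boole.SetName("Groove on Sphere")

    doc.InsertObject(boole)
    sphere.InsertUnder(boole)       # A: the object being cut
    torus.InsertUnderLast(boole)    # B: the cutter, must come second

    c4d.EventAdd()

if __name__ == '__main__':
    main()
```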
-
For animation I would use MoSpline, since it has a lot of animation possibilities/parameters.
1 point
-
You can reference a classic object into the graph. The geometry port is under the "op" group port. To import, use Ctrl+drag from the Object Manager into the graph; that will create a copy of the geometry in the graph itself.
1 point
-
@DasFrodo You are getting just the geometry from the cubes without the transforms (matrices); the transforms are needed.
1 point
-
You may find some examples of iteration in the capsule file pit thread. If the link works, it will get you there, plus to a comment where more links to resources are mentioned. A great place for a better understanding of Capsules + Scene Nodes. 🙂
1 point
-
Use a Connect object as a child and put all the geometry objects there. Or use the object links input (drag the objects you want to use from the OM into the object links panel), decompose it, and transform the geometry by iterating over the geometries and matrices before the Clone onto Points node. But I think the first solution is more understandable 🙂
1 point
-
New around here and back in C4D land after a VERY long break. I was an avid Modo user and I'm also a former Foundry employee from back around the time of the acquisition 😅 although I was a designer, not a product manager. I do still have a bunch of friends there and, sadly, when development stopped, some of my friends got laid off.

The TL;DR of why Modo probably isn't around is its small foothold in professional markets. It's the best modeller out there, nothing else even comes close. There are one or two people in most VFX studios and they're generally very happy and very productive. But Modo in its early days had a huge number of hobbyist users. Post buyout, they started leaving in droves, initially in part due to the Foundry wanting to push it further into VFX (which didn't work), and then they pivoted to a number of product design contracts and focused a lot of development there. The hobbyists, initially frustrated and then priced out, left mostly for Blender. A lot of professional users stuck around, but those relationships often became more strained over time. Modo was never the most stable application, and it has had a rocky history with stability depending on the release. Some releases might bring some awesome features, but they also might crash 10 times a day. A lot of those users left too.

In terms of features though, there's a lot of incorrect info out there about Modo - it had a rep for only being a modeller, which just wasn't true. It had a bunch of awesome features: dynamics, some pretty decent mograph/replication tools, sculpting and painting (about on par with BodyPaint FWIW), a kickass (but CPU-only) renderer, a super intuitive material workflow, some fairly decent animation tools and some crazy customisation capabilities. But it was very much a jack of all trades, and none of those aspects were strong enough compared to the modelling. It was either "it only does modelling, right?" or "every feature apart from modelling sucked"; the truth, as always, was somewhere in the middle. It covered a lot of the same ground as Blender, with a lot of the same weaknesses - it just cost much more 😂 I imagine unless you were a serious modeller and super committed, it became a very hard sell.

I've now made the decision to move the bulk of my work back to C4D, as I'm not a huge fan of Blender and hate Autodesk with a passion. I've always had a soft spot for C4D and it was my main package for about 5 years before Modo. I'm not a super serious modeller and most of what I do is product shots and mograph-adjacent things. I'll still be doing my modelling in Modo for the foreseeable future though; they've issued a 10-year EOL license that anyone can get hold of. My commercial license runs out next month.

...that was quite a long TL;DR, but someone might find it interesting, IDK.
1 point
-
PlantFactory2Blender

It was only a matter of time before we saw some bridge between VUE and Blender.

https://roberd.gumroad.com/l/PF2B
https://blendermarket.com/products/pf2b
1 point
-
Here is a simple PSD morph example. The joint is driving the biceps/triceps bulge, deliberately exaggerated for display purposes.

PSD_Morph.c4d
1 point
-
CORE4D has got a great beginning series:

My Scene Nodes 101 series isn't exhaustive but has some simpler examples:

For more advanced training, Dominik Ruckli has some great videos: Dominik Ruckli - YouTube

Additionally - once you've got a handle on Scene Nodes data structures and beginning/intermediate setups, you can generally follow along with Houdini / Geo Nodes tutorials. For those, it is hard to do better than Entagma: Entagma - YouTube
1 point
-
Just after a few minutes of testing and a few crashes, I think the OpenPBR shader doesn't work with Material translation (Viewport/Export) Baking enabled. I was reworking nodes and it kept crashing. I turned it back to Draft and so far no crashes.
0 points