Leaderboard
Popular Content
Showing content with the highest reputation since 06/11/2025 in all areas
-
Wetmaps: I created a vertex map in field mode on the dinosaur and put the liquid mesher (with a bigger voxel size so it renders faster) into the field list. In the field settings, the mesher was set to volume, and I put a decay field on top at 100% so it stays wet wherever the mesher has touched it. I rendered the vertex map as a plain black/white mask, so I could use it in comp to darken the original render. Additionally, I rendered a separate, very reflective pass of the dinosaur that I added on top in comp, again using the wetmap as a mask. trex_water_v007_Wetmap.mp4

I then comped everything together in Nuke. I tried to make a smooth transition between the simulated area around the dinosaur and the rest of the lake, but that was a bit of a challenge and didn't work out to my full satisfaction... it's still noticeable that the simulation doesn't spread out to the rest of the lake.

Some final thoughts: AFAIK, our liquids solver is an SPH solver, which is generally designed for small-scale fluids. FLIP would probably be better suited for fluids of that scale. That makes it even more impressive that the new solver is capable of pulling this off. It's also very speedy and very stable, and fully art-directable, as it works with all the forces, all other simulation types and all the particle modifiers. I enjoyed that little project.
5 points
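In comp terms, the wetmap trick boils down to masked darkening plus a masked reflective pass. Here is a minimal sketch of that logic in plain NumPy (hypothetical array names, not my actual Nuke node graph):

```python
import numpy as np

def comp_wetmap(base, refl_pass, wetmask, darken=0.35):
    """Darken the base render where the wetmap is white, then add the
    reflective pass on top, both masked by the wetmap (values 0..1)."""
    wet = wetmask[..., None]                 # broadcast the mask over RGB
    darkened = base * (1.0 - darken * wet)   # wet areas get darker
    return darkened + refl_pass * wet        # masked reflective pass on top

# Usage: images as float arrays of shape (H, W, 3), wetmask of shape (H, W)
# result = comp_wetmap(base_rgb, refl_rgb, wetmap_bw)
```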
-
Here's a little project I made while I was beta testing the new liquids. trex_water_v006_high.mp4
5 points
-
Here are some insights from that little project. Easy things first: I reused the animation from my T. rex museum breakout. The environment is an HDRI from the asset browser. The lake (or river) is just a reflective plane with animated noise, and a gradient in the opacity channel to fade it softly into the backplate HDRI.

The trickier part was the simulation, naturally. I wanted to simulate only the top layer of water, and I needed to figure out a way to only simulate particles in the area around the dinosaur. A whole lake would have been way too many particles, of course. At first I tried to simulate it only in a "pool" that was a bit bigger than the T. rex, using the liquid fill emitter. It worked, but I had visible splashes at the border of the pool - unfortunate if you want the transition between the simulation and the rest of the lake to be as seamless as possible. Instead of walls, I then gave the pool a slightly rising floor towards the edges. That helped mitigate the splashes at the border. However, I still had the problem that the liquid spread out pretty fast and drifted away from the dinosaur, where I actually wanted it to be. The beauty of the new liquids is that they work so seamlessly with the rest of the particle system: I could just use an attractor force in the center of the dinosaur. With the right settings, it helped keep the particles near the T. rex while still allowing it to act like a liquid. Here are the cached particles with the collider geo: trex_water_v007_Viewport.mp4

The sim contains 3.7 million particles. I used the liquid fill emitter with a radius of 0.7 cm. Sim time was IMHO very reasonable on my RTX 4090, around 20 mins for 150 frames. I would have liked to simulate even more particles, but when I lowered the radius to 0.6 cm (= 6 million particles), I didn't even see the first frame after waiting for several minutes, so I quit that. The cache is 27 GB. I only cached velocity, color and radius, which helped to reduce the size.

I then used the liquid mesher with pretty much the default settings, but a smaller influence scale, a lot more smoothing and the droplet size set to 10% - that really helps get rid of those huge blobs that often appear. Here's a clay render, liquid mesh only: trex_water_v007_Clay.mp4

The particle color is mapped to the velocity, which was very handy for faking whitewater. I just remapped the particle color with a ramp to diffuse and reflection strength: the faster the particles, the brighter they are. I rendered them as a separate pass and added it on top in comp. Those particles render crazy fast! A few seconds per frame for 3.7 million particles with motion blur... that was a real joy. trex_water_v007_Whitewater.mp4
4 points
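As a quick sanity check on those numbers: with a fill emitter, the particle count scales roughly with the inverse cube of the particle radius (that scaling rule is my own assumption; the radii and counts are the ones from above):

```python
# Fill emitters pack particles by volume, so count scales roughly with 1/radius^3.
count_at_07cm = 3.7e6                 # particles at 0.7 cm radius (from the sim above)
scale = (0.7 / 0.6) ** 3              # effect of shrinking the radius to 0.6 cm
print(f"{count_at_07cm * scale / 1e6:.1f} million")  # -> 5.9 million, i.e. the ~6 million quoted
```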
-
I haven't tried the update yet, but I feel the need to comment on the presentation... It was sooo underwhelming o_O

UDIM... ok, I read about that, but why not show a real-world problem, then the new tools, then the superior final product? Jonas focused way too much on UV layouts, and in the end I still have no idea what benefit I get from this tool. Faster workflow? Higher quality? I don't know. Jonas leaving the presentation was really awkward - why not choose hosts who have enough time to spare for talking to us customers?

Laubwerk assets now available in the asset browser - nice! But why show the variations in the viewport and not with a nice render? I have no idea if these assets are any good (I guess not, since it's all somewhat procedural?)

Redshift now officially supports the Nvidia RTX 5000 series. Well, then please give us at least some example benchmarks! "Renders 30% faster than on a 4090" would have been really good information. Or show the interactivity boost of IPR on a 5090, to get some "ooOOOOhs" out of the audience.

Redshift's new OIDN update... cool, but please: zoom in and show a before/after comparison of the denoising quality! Give us the dirty details; in this case, they make all the difference.

Redshift portal light updates... I mean, come on. That was changelog material, but for the bottom part. No need to show a line on the gizmo to stretch out the time until the great liquids reveal...

... and then Noseman chooses this laggy, 1-fps scene of a mildly irritating chocolate stream on an alien cake to introduce the world to C4D's long-awaited new liquid system? After getting so much flak for the gingerbread teaser? I mean, the gingerbread teaser's erratic particles are smoothed out, but this looks like liquid tech from the time they made Terminator 2. And then there's scene after scene after scene where we only get super simple, manual-style scenes showing some aspects of the new tech, but Noseman rushes through as if he were embarrassed by the shortcomings of the new tech. And there seem to be many? There was not even a mildly complex example; performance was really sub-par (why not get Noseman a 5090 for this gig, if his 5-year-old hardware is to blame?), no whitewater supported... and the capabilities of the tech were only described afterwards, when the viewers asked explicitly about it.

So, all in all, this seemed like a very hastily thrown-together presentation of features that should have cooked some months longer. I really miss Chris Schmidt's presentations of new releases... he perfectly mixed the wow factor of new possibilities with little tutorial aspects ("See? It's not even complicated to set up!") and some interesting technical insights, which made it even clearer what can and can't be done with the new features. I hope they get him back for future releases.

Edit: This 5-minute read is a much better presentation of the June update than that whole 30-minute video stream 😕
4 points
-
Found the solution! It's a bit unintuitive to set up, but it works, at least for a relatively simple setup like this. But, as I feared, there is NO way to do this with the SDS intact. It just has to be converted. Maybe this is possible with scene nodes, but I don't know how those work, so this is my solution for now.

1. Convert your target and source mesh if it's an SDS.
2. Click on the Phong tag of each of them and press "Create Normal Tag". This essentially "freezes" the normals on the mesh in their current state, as the Normal tag overrides the Phong tag as long as it exists.
3. Click on the Normal tag on your target mesh and enable "Fields" in the "Transfer" menu.
4. Switch to the "Fields" tab on the tag. You should see a "Freeze" layer. This layer is basically the current normals on your mesh as the Normal tag saved them. Do not delete this, or you will lose all your normals and your mesh will appear black.
5. Drag the Normal tag from your source mesh into the fields list. You should see your normals change.
6. Click on the "Variable Tag Field" layer you just created and switch to the "Layer" tab. Change "Mode" to "Average". This essentially takes your target normals and changes them so they are an average of all the source mesh normals that are within a specified radius. You can adjust how large this radius is with the "Radius" setting. You want this relatively low, so only a couple of points around your "transition area" are involved. You might not want a smooth transition either, depending on your use case; in that case you can disable "Distance Falloff".

This entire setup should look something like this:

So what does this do? Essentially we are keeping our original normals (which is the "Freeze" layer) and then overwriting the parts that we actually want via the "Variable Tag Field" layer. If this is set to anything other than "Average" it will not work as intended for this workflow. Averaging is exactly what we want here for a smooth transition. See the sketch after this post for what that averaging amounts to conceptually.

Now let's assume we only want ONE part of the mesh to actually have this transition. For this we need to add a little more to the fields. We can, for example, add a polygon selection as a mask below the "Variable Tag Field" layer. Then a polygon selection like this: ... can mask out the transition to only happen at the base, not at the thin part:

You can add another "Freeze" layer after all of this to "save" the setup and delete the source object, so you don't have to keep unnecessary objects in your scene.

So yeah, this is a bit more involved than in Blender and unfortunately not at all as convenient and, most importantly, not as non-destructive as in Blender, but at least it's possible. Maybe something Maxon can work on in the future 😉 I think this should be part of the Normal Editing toolset with a couple of clicks instead of... this.
3 points
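To make the "Average" mode a bit more concrete, here is a small conceptual sketch of what radius-based normal averaging does (plain NumPy, hypothetical point/normal arrays, not the actual C4D field implementation):

```python
import numpy as np

def average_normals(target_pts, target_nrms, source_pts, source_nrms, radius):
    """For each target point, blend its normal toward the average of all
    source normals that lie within `radius`, then renormalize."""
    out = target_nrms.copy()
    for i, p in enumerate(target_pts):
        dist = np.linalg.norm(source_pts - p, axis=1)   # distances to all source points
        near = source_nrms[dist < radius]               # source normals inside the radius
        if len(near):
            blended = (out[i] + near.mean(axis=0)) * 0.5
            out[i] = blended / np.linalg.norm(blended)  # keep it a unit normal
    return out
```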
-
Hrvoje Srdelic is a well-known forum member and C4D guru. Always ready to help, often assisting in the backend as well, we are truly happy to interview him. There is an understated, modest quality about him, and the dedication he has put into the C4D ecosystem is well known and recognized. He is an expert in many areas who has helped shape the community for many years.

Stuff about you: former/current workplaces, achievements, interests and hobbies
46, married, with a wife, a kid, a mortgage. You could say semi-grown-up 🙂 In 2011, I joined the Maxon beta team and took on the testing role with the same focus, attention, and dedication I had given to my previous jobs and freelancing work. Those efforts did not go unnoticed, and I was offered a position at Maxon, where I continue to work to this day. A highlight of my career was receiving an Academy Award for MoGraph as part of the development team. Apart from 3D, I've fully adopted a Mediterranean lifestyle, spiced up with heavy gym sessions and outdoor activities like ziplining, canoeing, and mountain biking. Lately, I've been getting into programming, especially nodes. For relaxation, a good book with a good coffee is an unbeatable combo.

Q: Describe yourself in a single word
Tenacious

Q: How did you get into C4D and 3D in general?
Back in the 90s, after finishing my education, my first job was at a computer shop. This gave me a solid foundation in hardware and software, which served me well from then on. My next job at a design company sparked what you could call an obsession with 3D. I was captivated by the possibility of virtually constructing my own ideas. The fact that I could iterate and improve creations without real-world constraints was an uplifting experience that fueled endless enthusiasm. Over time, I became proficient in various 3D disciplines. Around 2010, I started freelancing and, after careful consideration, chose Cinema 4D as my main tool. Its simplicity, fast turnaround, parametric nature, and overall stability sealed the deal. As a bonus, I discovered this wonderful forum. I spent a lot of time asking questions and learning. Eventually, I realized I was answering questions frequently, mostly because I felt indebted to the community that helped shape my career. Back then, I noticed a lack of systematic, end-to-end learning materials for C4D, so I took the plunge and created the popular Vertex Pusher series.

Q: Which area interests you most?
Nodes, MoGraph, and simulations.

Q: What other apps are you using and what for?
I use various software daily, like GIMP, Camtasia, Vectorworks, and Adobe tools.

Q: Which learning resources would you recommend?
The CORE4D YouTube channel is excellent, especially for technical topics. Of course, I have to mention Noseman and the Maxon training channel - wish I had that when I was starting 🙂 In the past, I relied on YouTube with heavy filtering to find quality content. At the time, there wasn't a single go-to source.

Q: Do you think talent is overrated and can be offset with hard work?
Yes.

Q: Thoughts on AI?
It's just a tool. You can get impressive results without requiring specific expertise, but I don't see it replacing humans.

Q: Tell us something we couldn't possibly know about you.
I've also worked in construction and as a lumberjack. At one point, I was also a lifeguard. Physical work brings me great joy, and my wife often keeps me busy with an endless list of household projects.

Q: Top 3 wishes for C4D?
I see C4D as a mature platform. It has evolved to the point where users can stay in C4D for most, if not all, 3D content creation - unless very specific requirements need to be met. Refining and polishing it as a daily tool, simply progressing on all features and maintaining quality, would be high on the list. There are directions I'd personally pursue much further, but they're likely not what most users would want (remember, I love nodes and technical stuff!).

Q: Message for CORE4D?
Lately, there has been more content consumption and fewer conversations. With information so readily available, people are engaging less in sharing and talking with fellow artists. I find this counterproductive, as it disconnects people from the most satisfying part of being in a community. I encourage everyone to share their work, start or reply to topics, follow up, show examples, and network. Avoiding these actions is one reason many artists struggle to find gigs or jobs. You managed to model your first object? Rendered your first image? Created your first animation? Share it, and don't be surprised by the positive encouragement you will receive!

Q: How can people contact you?
I'm easily reachable via PM on this forum.
2 points
-
It does feel like the overall reworked unified simulation system is for small-scale work, and only if you have the time to let it cache - iterating can be painful. To be honest, why would someone even want to tackle this in C4D over Houdini? I like that they are adding value to C4D, but it always feels like it's just short of being useful in most situations.
2 points
-
Good points for sure, hopefully this gives them something to improve. Derek Kirk's look at C4D fluids is ok. https://youtu.be/FmI2sllgqS8?si=_QfU_7xxUFsmlvuT
2 points
-
My impression of the liquid tech, solely from the presentation video:

Performance seems worse than X-Particles and even the RealFlow plugin for C4D. It seems like they wanted to make a solution for small-scale, hero-style motion-graphics liquids only? I didn't see an attempt at solving water bodies. Whitewater not available... whaaat? And no, remapping particle speed to color ain't the same thing.

It is part of the new unified simulation system, which is actually cool. But the scope for liquids seems so limited that I feel it actually devalues the whole simulation system, because we now see very severe performance limitations... again. It feels a bit like pre-core-rewrite all over again 😞 I don't get what Maxon was trying to achieve here.
2 points
-
In what way does C4D not support network drives? Lots of our projects run directly from a NAS, and all of our render farm systems pull directly from a Synology NAS. Never had a single problem, especially not since the network load/save code got a huge speed bump a few years ago.

Regarding the WFH bit, to be honest it really depends on the ratio of office vs home work. Here we have a variety of people: some 99% in the office with the odd day here or there from home, some who do 50/50, and some who are 99% at home with the odd office day; and we all do it differently. Those who are mostly in the office just pop some files on a drive to take home when they know they won't be in the next day. Those who are mostly at home do likewise: just stick some work on a USB stick and bring it with you. For the 50/50 guys, it's largely remote desktop. Chrome Remote Desktop is ok, simple and free. The best option, though, is Parsec. Now I know IT have said no... but... sometimes IT just need to be worked around. A little system reinstall here to regain admin control, the odd gift sent to IT so they look the other way there...

Regarding the NAS, why even bother? You said you're the only one doing 3D, so why not just keep absolutely everything local on your work desktop? Or just bite the bullet: grab a 2/4 TB external SSD, do everything from that, and stick it in your pocket for wherever you're working. I mean, if you're happy with the network drive speed then you'll probably also be happy with a simple spinning HDD; a 4 TB external 2.5" drive is £110, or £220 to make it an SSD.
2 points
-
I'm guessing standalone and included in Maxon One, with new bells and whistles. No idea if it will still be called Autograph, but the roadmap for those guys is now likely easier, as they won't have to replicate anything that is already in Red Giant. For the past couple of years I was wondering how they could afford development; now they don't have to worry as much. Some of the dedicated nerds on Reddit are really swearing about this, but I don't get the hate. We get to see new cool stuff in the future, and that is a win. Not much more to say about it, so now we can all wait and see what happens.
2 points
-
So in Blender there is this very handy modifier called "Data Transfer" that makes it possible to dynamically transfer normals between different meshes. What would you need this for? Well, for example something like this:

Meshes before transfer
Meshes after transfer

Note that these are EXACTLY the same meshes as before. It is not connected to the main body in any way; it's just the normals that were transferred for the row of points that touches the main body. Now, in THEORY, this is possible in C4D as well, in multiple ways, but I can't quite get it to work. I also tried exporting the mesh from Blender and importing it again in C4D, but that somehow breaks the shading and it doesn't look nearly as good as in Blender.

Technique 1: VAMP
Create a Normal tag on the source mesh, open VAMP, set up the source and target mesh, tick "Normals" and set "Space" to "Global", then press "Transfer Maps". This transfers the normals as expected, but I CANNOT decide what is transferred and what isn't. It looks like crap when ALL normals are transferred. And even IF you got that to look right: the Normal tag breaks completely when you add a subdivision surface, so the only option would be to convert the SDS first, then do the normal transfer. That is significantly harder due to the number of polygons involved, though, and kind of defeats the purpose of the parametric workflow in C4D. In Blender I could just put the SDS above the Data Transfer modifier in the stack.

Technique 2: Normal Editor
This kind of works, but not really. As far as I can tell, I can only adjust the normals of everything I selected in ONE direction. This works for the outer ring that touches the thick part of the model, but not for the part in front that wraps around the cylinder. The thick part just points in one direction, while the cylinder points in different directions. On top of that, you have to somehow get the transitions between the normals to look right, which I just could not manage. And again, this breaks when an SDS is involved, as the normals from the Normal tag are ignored by the SDS.

Technique 3: Transfer Attributes
This is, I think, the best solution. You can transfer the normals between meshes via the attribute transfer function, which is relatively new I think. The problem here is that I cannot seem to "mask" the transfer to only some points. It seems like the fields are acting as a value for the normals instead of being a mask for the value transfer. This is what I mean: it works JUST like the VAMP normals transfer, just that it's "parametric", which is exactly what I want. But I cannot seem to restrict the transfer to only certain parts of the mesh. Also, of course, this breaks again when an SDS is involved, but at least that way I could have my normal transfer stay parametric after converting the SDS.

If somebody could figure out how that works, if it's possible at all, that would be greatly appreciated 🙂 A stripped-down scene file is attached, and a sketch of the behaviour I'm after follows below. Transfer.c4d
1 point
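To spell out what I mean by "mask": conceptually I want something like this - copy the nearest source normal, but only for a masked set of boundary points (a plain NumPy sketch of the idea with hypothetical arrays, not an actual C4D API call):

```python
import numpy as np

def masked_normal_transfer(target_pts, target_nrms, source_pts, source_nrms, mask):
    """For every target point where mask is True, replace its normal with the
    normal of the closest source point; leave all other normals untouched."""
    out = target_nrms.copy()
    for i in np.flatnonzero(mask):
        nearest = np.argmin(np.linalg.norm(source_pts - target_pts[i], axis=1))
        out[i] = source_nrms[nearest]
    return out
```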
-
In the Displacer shader, add a noise in a Layer shader with a Transform shader on top. That one has an angle parameter.
1 point
-
@Hrvoje Do you think this is possible with Scene Nodes somehow? I'd be happy if I didn't have to convert the SDS.
1 point
-
Oh, I see, you would need essentially a double constraint - that looks quite difficult. Are the nulls always on the mesh surface? Essentially, transforming any null or the mesh itself should transform the whole setup, right?
1 point
-
Great work! Looks impressive considering the current state of the system.
1 point
-
You will have to transform all targets as well. With two axes it is not problematic - you can use constraints. multi_axis.c4d
1 point
-
I'm sure Autodesk will get no bad press at all for integrating AI character animation tools into their software that is known for having amazing character animation tools. This will surely go over well with the animators using this software. lol, I knew it:
1 point
-
Won't be of use, since this is a limitation that depends on the network setup. A file that doesn't work for me will work for you if you never have this problem. Years ago, in another job, we worked on network storage with C4D and it worked as well. Here it doesn't, and according to support this is perfectly normal.

I have never used this feature, but I'll have a look. I fear that I will not be able to use it due to IT guidelines etc. though 😛

Too much data, not to mention I have a library that needs to be synced all the time as well. Cloud storage is just not feasible with the amount of data and the internet speeds here.
1 point
-
Oh, I did not expect this to be available already 🙂 I am honored that I got the privilege to do the interview - thanks a lot to Rob @LLS. That is a good test for AI 🙂
1 point
-
Awesome interview, especially the message for CORE4D. I agree, people definitely need to interact, share and start things more (myself included). I will say, though, that this interview is incomplete without this one final question: @Hrvoje how the hell do we pronounce your name!!!??? 😭 🤣
1 point
-
Here is a quite elaborate setup - grid move. Clones travel along edges and avoid intersecting with each other. Something that is not readily available in MoGraph. 256_Grid_Move.c4d
1 point
-
You could store active projects in a OneDrive account and sync that seamlessly. You get much better working speed with local files on an SSD vs NAS storage or VPN. The library could stay on the NAS.
1 point
-
https://support.maxon.net/hc/en-us/articles/8658038724124-Cinema-4D-2025-3-June-18-2025 jeebus... UV tool refresh, UDIMs, texel density... they listened. Hoping they continue with Laubwerk and add wind/dynamics.
1 point
-
Chuck me a simple scene which has this problem and I'll see if there's anything obvious - just a cube with a texture which complains about it missing. Don't use 'Save Project', just copy the file as-is so no file paths get changed.

Why not turn on Windows/Mac backups? Both OSes have built-in tools where you can roll back a file to any previous version from weeks or months ago. In fact, use the network drive as the backup location; that way you get local performance and the backups are all RAIDed up.

Nothing stops you from having your external drive backed up. It's just a drive like any other. Set it to back up to your local machine or the NAS when connected; it can do this silently in the background. Here I run most projects from an external 4 TB SSD, and that drive backs up to a local 18 TB HDD whilst connected. On top of this there are the backups via proliferation, i.e. there's a copy of most stuff on the render master and a copy on the RAIDed NAS when projects are finished. Even without my own official backup, the data gets copied around so much that it's always in multiple places.
1 point
-
Yes, Maxon seems very excited and confident with this release, to the point where they're willing to show sneak peeks. It's a very exciting release that I'm looking forward to (although I may not be able to upgrade straight away)... both the coming fluids and a UV overhaul are already big ideas... I'm loving the overhauls especially. Cinema 4D has been killing it recently and I'm there for it. Also, I'm loving using Scene Nodes and am excited for improvements in that area as well...
1 point
-
It's funny you ask this, as I was under the same impression, because years ago when I worked with C4D in an office environment it was no problem at all. But according to Maxon Support, working on network drives was never and is not officially supported, and it is not recommended! I opened a ticket with them because every single one of my scenes just 404ed every single texture I connected to the scene when pressing render, even though I could clearly see the previews on the nodes (so it found the files). This is what I got from them:

I was caught off guard by this and basically asked them if they're fucking kidding me, to which they replied:

So not only does it not work at my company, there is not even some kind of workaround they can give me aside from "change some settings on your NAS and hope it works lol".

Because I'm a scatterbrain, and I don't trust myself not to overwrite stuff with older stuff that I cannot recover because I have no backups - which has already happened. Due to it being on the NAS, though, IT could recover an older file from a week ago.

Hehe, I'm good with IT, that's not the problem. They are just ridiculously overworked and don't have time to invest hours into something that would make ONE guy's workflow easier in a company with 200+ employees. It's a medical company, so the amount of stuff that has to be tested / verified with new software is frankly absolutely ridiculous and takes a crazy amount of time.

I do not at all like the thought of not having at least one separate backup of my data that is updated every day. Data that is not backed up might as well not exist. I have learned this the hard way in the past.
1 point
-
I wanted to test out ChatGPT converting RS Standard Materials to OpenPBR. After 30 minutes of correcting it I gave up. I changed the Redshift_Material_ID to 5703 after it suggested I check the material ID through another script it created. I still couldn't get it to work. I ended up running out of my free credits. Is this functionality going to be in the next C4D 2025 update? I had submitted a request on Maxon's site but it goes into the void.

It did mention one thing: "If Maxon exposes official Redshift Python API for 2025+, it might become possible, but as of now, it's limited." ChatGPT called out Maxon?

This was one of the versions it created. The first iteration actually had a section for specific texture inputs:

```python
transfer_texture(rs_material, "diffuse_color", openpbr_node, "base_color")
transfer_texture(rs_material, "refl_roughness", openpbr_node, "roughness")
transfer_texture(rs_material, "bump_input", openpbr_node, "normal")
```

```python
import c4d

REDSHIFT_MATERIAL_ID = 1036229  # Redshift material type ID (adjust if needed)
OPENPBR_MATERIAL_ID = 1036227   # OpenPBR material type ID (adjust if needed)

# The node ID string to identify Redshift node graph — you may need to adjust this based on your scene/plugins
REDSHIFT_NODESPACE_STR = "com.redshift3d.redshift3dnodegraph"


def find_nodegraph_by_id(root_node, id_string):
    """
    Recursively search nodes starting from root_node to find nodegraph by ID string
    """
    if not root_node:
        return None
    # Check current node's type ID string
    if root_node.GetTypeName() == id_string:
        return root_node
    # Recursively check children
    for i in range(root_node.GetDownCount()):
        child = root_node.GetDown(i)
        found = find_nodegraph_by_id(child, id_string)
        if found:
            return found
    return None


def convert_redshift_to_openpbr():
    doc = c4d.documents.GetActiveDocument()
    mats = doc.GetMaterials()
    changed = False

    for mat in mats:
        print(f"Checking material: {mat.GetName()}")

        # Check if this is a Redshift material
        if mat.GetType() != REDSHIFT_MATERIAL_ID:
            print(" - Not a Redshift material, skipping.")
            continue

        # Get Node Material reference
        nodemat = mat.GetNodeMaterialReference()
        if not nodemat:
            print(" - No node material found.")
            continue

        # Get root node
        root_node = nodemat.GetRootNode()
        if not root_node:
            print(" - No root node found.")
            continue

        # Find Redshift node graph inside root node by ID string
        rs_graph = find_nodegraph_by_id(root_node, REDSHIFT_NODESPACE_STR)
        if not rs_graph:
            print(" - No Redshift node graph found.")
            continue

        # Here you would copy or remap the textures/nodes from Redshift graph to OpenPBR graph
        # For demo, just create a new OpenPBR material and replace the old material in document
        print(" - Creating new OpenPBR material.")
        new_mat = c4d.BaseMaterial(OPENPBR_MATERIAL_ID)
        new_mat.SetName(mat.GetName() + "_OpenPBR")

        # Insert new material
        doc.InsertMaterial(new_mat)

        # Optional: Transfer textures/connections here if possible

        # Replace material in objects (simplified example)
        for obj in doc.GetObjects():
            if obj.GetMaterial() == mat:
                obj.SetMaterial(new_mat)

        # Remove old Redshift material (optional)
        doc.RemoveMaterial(mat)
        changed = True
        print(f" - Converted material: {mat.GetName()} to {new_mat.GetName()}")

    if changed:
        c4d.EventAdd()
        print("Conversion complete, scene updated.")
    else:
        print("No Redshift materials found to convert.")


if __name__ == "__main__":
    convert_redshift_to_openpbr()
```
1 point
-
Thank you so very much. It seems I may have resolved the matter by using version 10, which my boot sequence requires to be compatible with Redshift. Again, thank you for your kind words, you are the best!!!!! DDO
1 point
-
It will be interesting to see if they build a C4D compositing tool or a new standalone based on Autograph and Red Giant.
1 point
-
I wanted to move a lot of free Blender objects and scenes into C4D. I tried a free IO plugin on Gumroad which could copy/paste between Blender and C4D, apparently using FBX. Many times the materials didn't have textures or were referencing ones that were misnamed in the conversion. I decided to try USD instead and it seemed to work better. Full scenes of objects with materials came across, including converted RS materials. While it wasn't perfect (any materials with transparency lost that parameter in the process), it was better than straight FBX.

I then had ChatGPT come up with a couple of scripts, one for Blender and the other for C4D. The one for Blender would simply copy selected geo to a temp USD file. In C4D, the script would copy that temp file into the existing scene. I eventually added to it to allow the Blender scene to unpack and write any textures to disk before exporting the USD. That's because some Blender scene files have their textures packed inside the file, and it seemed like the USD export didn't grab them. Somewhere along the way with the rewrites I had it do, it decided to export the whole scene (incl. cameras). I'm ok with that for now.

I also had ChatGPT create an icon for the C4D script. I didn't do one for Blender, since the devs hate any real customization like putting custom buttons on the UI or other fun stuff (without coding it with Serpens). I know you can add an icon in the add-on, but I hate the UX of the add-on tabs anyway.

Included below is the bundle. Install in Blender as an add-on; you'll find it in the add-ons tabs. Install in C4D as a script; apply the icon to the script and place the script anywhere on the UI if you want. It asked if I wanted to create it as a plugin but I didn't go that route.

NOTE: This is free. Do whatever with it. During the modifications of the scripts in ChatGPT I specifically told it I'm using Blender 4.4.3 and C4D 2025.2.1. I can't verify this will work on older/future versions of either software. It's the environment I'm using now, so I built it as such. It does make me wonder if some of the old scripts/plugins could be reinvigorated with GPT to make them work with newer versions of C4D. usd_io_blender and c4d.zip
1 point
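Roughly, the two halves boil down to something like this (a stripped-down sketch, not the exact scripts in the zip; the temp path is just an example, and the C4D side assumes the USD importer is available so MergeDocument can read the file):

```python
# Blender side (run from an add-on/operator): unpack textures, then export USD.
import bpy

TEMP_USD = "/tmp/blender_to_c4d.usd"  # example path; the real scripts pick their own

bpy.ops.file.unpack_all(method='WRITE_LOCAL')  # write packed textures to disk
bpy.ops.wm.usd_export(filepath=TEMP_USD)       # exports the whole scene, cameras included
```

```python
# Cinema 4D side (run as a Script Manager script): merge the temp USD into the open scene.
import c4d

TEMP_USD = "/tmp/blender_to_c4d.usd"  # must match the path the Blender script wrote

def main():
    doc = c4d.documents.GetActiveDocument()
    c4d.documents.MergeDocument(doc, TEMP_USD,
                                c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS)
    c4d.EventAdd()  # refresh the UI so the merged objects show up

if __name__ == '__main__':
    main()
```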
-
Autograph isn't tainted, just like ZBrush wasn't tainted, but Maxon dropped the name Pixologic and I expect them to drop the name Left Angle. They could easily dump the name Autograph as well, though, I guess.

One of Autograph's most recent touted features was the ability to import AE files and replicate whatever was going on there in Autograph, barring various plug-ins. Maxon are heavily invested in AE plug-ins, but it wouldn't hurt to make them all Autograph compatible, and to tell some newcomers: yeah, you could sub to Adobe if you want, but Autograph (or whatever the new name is) has most of that stuff now, works 100% with our Red Giant and C4D stuff, and you don't need a second subscription, just ours. Would this make them more bucks, or less?

Red Giant already has Red Giant staff working on Red Giant comping plugins; do they need more people doing the same? The Left Angle guys just spent three years invested in doing something different to AE, rather than just making AE plugins (which would have been easier), in a compositor similar to AE but conceived as fresher and hopefully with a more ambitious roadmap. Should they now (a) dump all that work getting away from AE and go back to making AE plugins, or (b) keep on trying to do the new stuff they were doing when Maxon bought them? More to the point, which of those two options do you think they want to do?

If Maxon just wanted to buy all the Left Angle staff and have them do Red Giant plugins, there wouldn't have been much need to put out a press release with the word 'Autograph' in the headline, and a big fat screenshot of the Autograph app at the top of the page. So Left Angle has gone the way of the dodo, but I think the chances of Autograph coming back in some fresh, rebranded form are a fair bit higher. Possibly the name will change totally and the UI will get the Maxon treatment. If the 'something new' is something with a timeline, an object browser, a compositing window, various generators and text tools and so on, it'd be daft to make all that a plug-in inside AE rather than presenting the whole lot, tidied up, as a new app offering a competitive alternative.

The few Autograph threads on the AE Reddit page (there were a handful) all said "Please give us an alternative to AE, it's such a crashy POS". Then when I do my annual ritual of checking the official AE forums, the posts there under most updates read "Has this app become even worse? This is the crashiest version in years." There is room for an alternative. McGravran was previously the director of engineering for After Effects, the Left Angle guys had heavily planned to make a modern AE competitor, and neither of them seem lazy or unambitious. The next 12 months will be interesting.
1 point
-
Maxon just posted their Arch Viz reel. It seems like some of the work in the reel bleeds into interior design and product vis too. Very elegant.
1 point
-
One thing that irks me with every simulation tech I have used so far: most of the time I want to emulate the movement of something from the real world as closely as I can, and most of the time it's endless fiddling with somewhat abstract parameters to achieve the realistic movement I know from the physical object. It really should be the other way around. Let me make my object, define its size and weight, and then let me choose from fine-tuned, well-tested presets: that's water! Or milk. Or sand. And then offer an advanced mode where I can fiddle around with advection and all the stuff under the hood. But really, let me express my artistic vision through my everyday knowledge of the real world. I don't want to study particle physics for any simple sim I do.
1 point
-
Sorry to disagree again, but it is the exact opposite. MD gives enormously precise physics control by having a separate physics tab: modeling and physics are precisely controlled. In the current new C4D way, I am basically blind-guessing how low my low-poly cage needs to be. How would I know the mesh density of a certain fabric style? I am forced to guess around with multiple poly-density cages until I get there somehow. 100 polys? 500? 2500? I am realizing you guys designed it with MoGraph stuff in mind. I initially thought Cloth was for characters. Guess that's where we differ.
1 point
-
Ok! Then please provide some samples in the asset browser. I think it's always a good idea to let users dissect examples or to give them a good starting point, especially when parameters are sometimes somewhat unclear (like lower substeps making for a more bendy material).
1 point