Daniel Seebacher
Registered Member · Posts: 73 · Days Won: 4
Everything posted by Daniel Seebacher
-
How to save an image from Arnold Renderer in Cinema 4D?
Daniel Seebacher replied to Nkzmi Dev's topic in Cinema 4D
Happy it worked. It was a mere shot in the dark 🙂 Sometimes it's the little things that can be missed so easily... -
How to save an image from Arnold Renderer in Cinema 4D?
Daniel Seebacher replied to Nkzmi Dev's topic in Cinema 4D
In the above screenshot, the format dropdown is empty. Maybe that's why it's not working? Try selecting a format in the dropdown and saving again? -
Realflow Standalone does this too with Hybrido for large-scale fluids. You first simulate the fluid, then you can simulate additional (and much faster) separate passes such as foam or mist, which reuse cached data from the main fluid sim. Then you usually mesh the core fluid and export foam or mist as particle point clouds. I've always wondered how you would then render these particles to look like foam or mist. Krakatoa can do this as a point cloud renderer, but I feel the plugin hasn't really seen many upgrades in recent years for any DCC. And the last supported C4D version of the Krakatoa plugin was R21. I don't know of any alternative for rendering convincing foam from particles, though.
-
Avatar 2: The Way of Water Trailer now at IMDB
Daniel Seebacher replied to 3D-Pangel's topic in Discussions
Davy Jones from PotC2? Absolutely amazing. If I recall correctly, every tentacle was rigged and animated individually. And yet, you could still see Bill Nighy's facial expressions. -
Avatar 2: The Way of Water Trailer now at IMDB
Daniel Seebacher replied to 3D-Pangel's topic in Discussions
I found the first Avatar barely watchable due to the awful script and the flat cliché characters. Disney did it better with Pocahontas. Have yet to watch the trailer for the second one. James Cameron has to try hard to convince me to give it a shot in theaters. -
Does it work if you put the light into an instance object and then mirror the instance object?
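If it helps, here's a quick Script Manager sketch of that idea. It assumes a light in the scene literally named "Light"; the object name and the mirror axis are placeholders, so adapt them to your setup.

```python
import c4d

def main():
    light = doc.SearchObject("Light")            # placeholder name - use your light's name
    if light is None:
        return

    instance = c4d.BaseObject(c4d.Oinstance)     # instance object referencing the light
    instance[c4d.INSTANCEOBJECT_LINK] = light
    instance.SetName(light.GetName() + " (mirrored)")

    # Mirror across the world YZ plane: mirrored position, flipped X scale.
    pos = light.GetAbsPos()
    instance.SetAbsPos(c4d.Vector(-pos.x, pos.y, pos.z))
    instance.SetRelScale(c4d.Vector(-1, 1, 1))

    doc.InsertObject(instance)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```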
-
@Kelly Lim @HappyPolygon The above solution works (import the OpenVDB, convert it to polygons, then fill the polygon object with new voxels using a Volume Builder). I tried it with a small VDB that I generated and got good results. Don't forget to link the Volume Builder to the density channel of the Redshift volume, or you will not see anything 🙂 A rough script version of the object stack is below.
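For reference, here's a rough Script Manager sketch of that stack (Volume Loader → Volume Mesher → Volume Builder, using the R21+ volume objects). It only builds the hierarchy; the VDB path, the Fog mode on the builder and the Redshift volume material still need to be set up by hand.

```python
import c4d

def main():
    builder = c4d.BaseObject(c4d.Ovolumebuilder)   # re-voxelizes the meshed VDB
    mesher  = c4d.BaseObject(c4d.Ovolumemesher)    # VDB volume -> polygons
    loader  = c4d.BaseObject(c4d.Ovolumeloader)    # imports the .vdb file

    builder.SetName("Refill Voxels")
    mesher.SetName("VDB to Mesh")

    doc.InsertObject(builder)
    mesher.InsertUnder(builder)    # the builder voxelizes its child, the meshed VDB
    loader.InsertUnder(mesher)     # the mesher polygonizes its child, the loaded VDB

    # Remaining manual steps:
    #  - point the Volume Loader at your .vdb file and pick the density grid
    #  - switch the Volume Builder to Fog mode
    #  - feed the result into the density channel of the Redshift volume material
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```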
-
No, the Volume Builder needs a closed mesh to then fill its inside with voxels (in Fog mode). It can't add new voxels to an already voxelized object. So my idea was to import the OpenVDB, mesh it with a mesher, and then fill this newly created polygon mesh with potentially more voxels than before. But with the constant conversion between voxels and polygons, C4D chugs on my machine. I am currently looking for a smaller OpenVDB file to test this process once more.
-
I never played either of the two 😆 Okay, I can report back with partial success with your suggestion of importing the VDB directly as an object. This sequence here works to at least get the cloud rendered: However, I am having trouble editing the settings in the Volume Builder to refill the meshed OpenVDB with new voxels appropriately. But that might also be due to the large size of my VDB file. I would need to try again with a small one.
-
I could never get the native C4D volume objects to work in combination with Redshift. But I'll try it again and use the raw import instead of the Volume Builder as you suggested. Maybe this would be a workaround. I'll report back 🙂 By lowering the absorption and making the cloud more transparent, the cubic look also disappears quite well.
With high absorption (very opaque cloud):
With low absorption (the problem is still noticeable, but not nearly as much):
-
Unfortunately no, because you cannot add more resolution to an OpenVDB than what is in the file. Subdividing the OpenVDB further at render time (e.g. through a smaller step size) will add more accuracy to the lighting calculation, but it will not change the shape. If a volume looks pixelated, you need a higher-resolution file. Redshift currently does not have volume displacement, which could add procedural noise to the volume and hide low-resolution details. It's on the Redshift roadmap, though.
-
You don't need Cinema's Volume Builder at all to import a volume. The Redshift Volume object is enough. Load the VDB into the RS Volume object, then create an RS Volume material and, inside the material, select any of the available channels from the VDB file (usually density) for the scattering channel.

Then check your Redshift light sources in the scene. Any light you want to influence the volume needs to have its volume contribution scale parameter set to a value above 0. If none of the lights have a contribution scale above 0, the volume will not be rendered.

From the screenshots you posted, I assume that you did set up all of this already. Then just delete the Volume Builder, use only the Redshift Volume object, and you should be good to go.
-
Nope, it just looks like it would 🙂 The USD file just contains the indices and positions of the vertices that make up the mesh, just like an FBX or Alembic file would. There is no geometry node graph contained inside the file. But it's great that you had the perception that there was one, because that's the goal - to produce the illusion that you are truly editing the exported mesh inside of PF.

The USD file contains the aforementioned parameter values and the path to the original PlantFactory scene file where the node graph is contained. The plugin in OV triggers a launch of PlantFactory, then it asks the application to open the scene file with the node graph that was used to export that species, and once the scene has loaded, it passes the metadata from the USD file to the scene and recreates the plant variation. It's basically like a macro that you run.

This workflow breaks if you have modified the original source scene and node graph since you last exported the USD file (because then the parameter metadata in the USD file no longer matches the content of the plant graph) or if you move the scene file to a different folder. In the latter case, you'd be prompted by the application to select the source file manually. I also believe we block editing of the original PlantCatalog scene file for each model so as not to break this workflow. If you want to make changes to the scene, you need to save a copy which can then be modified. I need to ask again to be sure...

Again, I'm not an expert on this, so please take everything I say with a grain of salt. It's based on discussions with our devs and the bits and pieces I, as a non-developer, understand from the official Pixar USD specification (Introduction to USD — Universal Scene Description 22.03 documentation (pixar.com)).

I believe you need to distinguish between geometry and shader data / materials. Geometry data is stored in a baked format (as mentioned above, for example the vertex indices), just like in other 3D formats, with support for NURBS, polygons, splines, primitives and more. The way this information is packed, with its database-like structure and hierarchical approach, is more efficient to unpack / stream than other file formats, though.

Materials in USD can be described arbitrarily. I think it's a bit like XML (comparing apples and oranges here, but still) - there's a basic simple shader with a few properties such as color, normal maps etc., and you can expand upon this concept at will and add new custom properties found in a specific software. The key is that the software into which you import the USD file has an interpreter for those custom properties and knows what to recreate in the material when reading them. For example, if Redshift were to write custom material properties into USD files such as RSMaterialRefractionIndex (I am making this property up as I type...) and you then import a USD file with this custom property into a Redshift scene, the Redshift USD interpreter would need to read it and understand "Oh, I know this property. It's the value for the refractive index in the transmission group of a RS material node. So I am mapping it to this parameter." Without this interpreter, the value wouldn't mean anything and it would be ignored when importing the object. Generally, if an application comes across a material parameter it doesn't know, it either ignores it (if it's something optional) or it falls back on the USD Standard Surface shader definition and generates a generic material upon object import.

Because everything is hierarchical in USD, I suppose that by embedding layers and dependencies, you can describe whole material node networks that would be recreated by the target application upon reading the file. Not 100% sure, though.

So to answer your question: yes, I believe it would be possible to export a USD file from VUE or any other DCC application that has an embedded native Redshift, Octane, Arnold material etc., provided that the companies behind those engines supply specifications / documentation on which values to write into a USD file so that it can be read specifically by their material systems. In our products, this is not how we currently do it. To create Redshift materials, you have to go through our plugins, where we directly query the render engines' SDKs and let them create their material node graphs themselves with values and image maps that we feed to them.

If you mean sending an exported USD mesh that you imported into C4D back to VUE/PF, then no. This is an Omniverse-only feature, because it would require a plugin similar to the extension that we offer for OV. But since we already have plugins for a few apps, I will ask internally if this functionality from OV could be ported over.

Finally, there's also MaterialX as a second material option. MaterialX is one of several shading languages (with MDL and OSL being the other two popular ones, which I think can also be derived from a MaterialX graph) that are gaining more and more support from multiple apps and render engines. With MaterialX, you can code the actual material nodes in a material graph, for example a custom noise node. When you then load an asset with a MaterialX material in any supported application, not only is the node graph recreated, but also the nodes themselves. Each application / render engine would then show the exact same custom noise node with the same controls and parameters. There would no longer be a Redshift noise node and a Cycles noise node and an Octane noise node. No more proprietary materials. Everything would be standardized.

Because not every node from every application can easily be recreated through MaterialX libraries, MaterialX is usually its own type of material, and you couldn't, for example, mix MaterialX nodes into a Redshift material. You would have to decide whether you want to go with a proprietary Redshift material with all capabilities, but readable only by Redshift, or with a MaterialX material with not quite as many options, but readable by any app that supports MaterialX. USD supports MaterialX, so the deeper the MaterialX integration becomes across applications and render engines, the easier and more seamless the material exchange will become.
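To make the "baked geometry, no node graph" point a bit more concrete, here's a tiny sketch with Pixar's Python USD bindings (the pxr module, e.g. from the usd-core package). The file name and prim path are made up for the example; the point is that a mesh in USD boils down to flat arrays of points and indices.

```python
from pxr import Usd, UsdGeom

# Create a new USD layer and define a mesh prim (names are arbitrary examples).
stage = Usd.Stage.CreateNew("triangle_example.usda")
mesh = UsdGeom.Mesh.Define(stage, "/Example/Triangle")

# This is essentially all the geometry a USD file stores: baked vertex
# positions plus face topology - no procedural graph of any kind.
mesh.CreatePointsAttr([(0, 0, 0), (100, 0, 0), (0, 100, 0)])
mesh.CreateFaceVertexCountsAttr([3])         # one face with three vertices
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])  # indices into the points array

stage.GetRootLayer().Save()
```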
-
Reading the initial post, I suppose we will see a live sync / connector for OV from C4D very soon, probably with S26 next month: "New Third-Party Connections for Adobe Substance 3D Material Extension and Painter Connector, Epic Games Unreal Engine Connector and Maxon Cinema 4D will enable live-sync workflows between third-party apps and Omniverse."

USD does not require RTX technology, only OV as an application does.

I am not an expert on USD's file structure, so a developer might be able to provide you with a more detailed answer. But to my knowledge, USD supports all major features from Alembic and FBX with a few smaller differences here and there (vertex colors, for example, are saved as "display colors" inside a USD file). After all, Pixar's goal with USD was to provide a universal file format for describing any type of 3D scene, hence the name "Universal Scene Description".

The key takeaway from USD's file structure is its similarity to a database. It is much, much easier to store non-standard "custom" properties than in other 3D formats, and its database structure makes it very efficient for streaming data into a scene, which is probably the main reason why Nvidia settled on it as the base for OV. For example, Redshift will be using this in the future to custom-code a Redshift material into USD without having to fall back on a generic "Standard Surface" shader from generic USD files.

I can give you a practical example of what we are doing for our own apps with the USD database structure, something that would not be possible with any other 3D format. For PlantFactory / PlantCatalog plants, we store the parameter values that were used when the plant was exported as metadata in the USD file. This includes the plant seed for that particular variation, season, maturity, health and any other custom parameter that plant might have. We then use a PlantFactory Omniverse Extension plugin that can send the exported USD file back to PF. In essence, the Extension reads the metadata from the USD file, then loads the plant scene file in PF and recreates the exact plant variation that was exported as USD. You then make changes to the plant parameters and, using the live sync, trigger a re-export of the USD asset, thus updating it inside OV. This essentially "keeps the procedurality" of the baked, exported asset, because as long as you have the scene file, you can go back and non-destructively reconstruct the scene state used for exporting that asset. A rough sketch of this metadata idea is at the end of this post. Here's a video about this process, taken from a live stream from the November GTC (the relevant part starts at 8:05 and ends at 10:56).

Yes, if you export to a virtual drive, this drive is connected to the Nucleus database. This is the reason why you would mount that drive in Windows. But you do not have to export stuff to the OV core / virtual drives. This is only required if you want to use the live sync where things update in real time in the OV scene. If you do not care about this, you can also export USD files to a regular hard drive and import them like you would import any other object in a DCC application. If you then make changes to the object and re-export, you need to manually re-import the USD file into the OV scene, as there would be no live update.

USD is open source and was invented by Pixar. There's no locking-in of content. Of course you can save a scene that contains an imported USD file as a *.c4d scene. USD is just another 3D file format that can do more than other formats in some aspects.
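As a hedged illustration of that metadata workflow (the actual PlantFactory exporter uses its own key names and file references, so everything below is invented for the example), custom data on a USD prim can be written and read back with the pxr Python bindings like this:

```python
from pxr import Usd, UsdGeom

# Export side: store the plant's export parameters as custom metadata
# alongside the baked mesh (key names and values are purely illustrative).
stage = Usd.Stage.CreateNew("plant_example.usda")
plant = UsdGeom.Mesh.Define(stage, "/Plants/ButterflyBush")
prim = plant.GetPrim()
prim.SetCustomDataByKey("sourceScene", "ButterflyBush_source_file")  # hypothetical reference
prim.SetCustomDataByKey("seed", 42)
prim.SetCustomDataByKey("season", 0.75)
prim.SetCustomDataByKey("health", 1.0)
stage.GetRootLayer().Save()

# Plugin side: read the metadata back and hand it to the source application
# so it can rebuild the exact variation that was exported.
reopened = Usd.Stage.Open("plant_example.usda")
prim = reopened.GetPrimAtPath("/Plants/ButterflyBush")
print(prim.GetCustomDataByKey("seed"), prim.GetCustomDataByKey("season"))
```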
-
In broad terms, Omniverse is a suite of freemium applications (free for personal use, paid for companies as an Enterprise version). There are applications for developing your own apps and games based on OV technology, apps for viewing archviz scenes and apps for building scenes. There's a launcher from where you can install what you need. Omniverse has its own custom real-time RTX raytracer and a fast path tracer, as well as its own physics system for rigid and soft bodies. These are more or less the key features you'd use when you want to use OV for building and rendering scenes. Omniverse requires an Nvidia RTX GPU to run.

All apps are based completely on USD. Omniverse apps do not accept anything BUT USD. In the background, the apps use a database core called Nucleus. This is essentially a localhost server that you can set up only for your machine, or you can set up a true server to share with teammates online. There's also an app called OV Drive for quickly creating a virtual hard drive in Windows Explorer that is linked to the Nucleus core.

Software companies can develop connectors (= plugins / live connections) for OV. With these plugins, you export your entire scene or single assets from the scene as USD files to the Nucleus core, e.g. by exporting them to the virtual Omniverse drive. Then you import the USD assets from the Nucleus core folder structure into your OV scene. Once exported, you can continue editing the scene in your DCC application, and with every change you make, be it a modelling change, lighting, materials, adding new objects (...you name it), a re-export of the scene as USD (thus overwriting the existing USD asset) to Nucleus is triggered in the background. The Nucleus core keeps track of only the differences since the last export and streams them to the OV scene, causing the assets to update in near real time in OV.

So if you are using a Nucleus server that is online, one team member working from home can edit an object in Maya, another team member somewhere else does something in C4D and another person textures something in Substance Painter. With the connectors from these apps, all team members can see what the others are doing live on their own screen through the scene in OV.

Because OV is based on USD, it is compatible with Pixar's USD Hydra delegate (essentially a bridge for render engines): any render engine that supports Hydra can be plugged into OV and used for rendering the scene instead of OV's default path tracer.

The main idea behind OV is thus collaboration across multiple apps in real time, where OV can serve as both a bridge and an endpoint for rendering the final scene. Because everything is a baked, exported asset, there are no incompatibilities between apps. USD supports MaterialX, so I suppose we'll see more MaterialX support across render engines in the future in addition to each engine's custom materials.

Now read the initial post again and it should make more sense 🙂

Edit: German press release from Maxon if you want to know more about C4D / Redshift and OV: Team-basierte Zusammenarbeit für Maxon-Produkte jetzt mit NVIDIA…
-
Insydium Sneak Peeks - 2022 (Terraform, Mesh tools)
Daniel Seebacher replied to 3D-Pangel's topic in Plugins
Oh my... thank you, Dave. I am blushing 😅 -
Insydium Sneak Peeks - 2022 (Terraform, Mesh tools)
Daniel Seebacher replied to 3D-Pangel's topic in Plugins
Hello all, e-on employee here. While I am a private user in this forum, let me take this opportunity to answer some lingering questions / clear up some confusion raised in this discussion. I apologize for side-tracking the Insydium topic with this post and I hope the mods will forgive me.

Procedural terrains
Yes, Cairyn and HappyPolygon are right. It's both a dynamic tessellation within the camera frustum based on camera distance and a fractal formula, seamlessly blending between the different areas (a tiny, generic sketch of the fractal-heightfield idea is at the end of this post). Because this is a render-time-only effect, the dynamic resolution is not exportable. To export a procedural terrain, you need to set the target export resolution and then it will be tessellated and baked accordingly into a mesh. If you have an infinite procedural terrain, you set up an export zone in the scene and everything within that zone is then exported as the terrain mesh at the target resolution. Alternatively, you can export the generated heightmap instead of the mesh at the desired image resolution.

VUE vs. other terrain apps
Yes and no. I'll admit that VUE terrains are not quite on par with the capabilities found in Gaea / WM / WC. These applications have more specialized nodes such as folding, more optimized erosion algorithms (our erosion is an actual fluid simulation algorithm and thus slow) and a few integration things such as splatmap calculation or extracting fine terrain details as a separate normal map. But VUE's terrains still offer nice results and they are fully node-based, whether you're using heightfield or procedural terrains. There are more than 60 different noises alone, going far beyond basic Voronoi and Perlin stuff, and more than 140 nodes in total for usage across terrains and materials. Sculpting on top of the computed heightfield is possible, but optional.

Here's a random example terrain I just threw together in five minutes with very few nodes. In the screenshot, you can see the mask outputs from the erosion node, which can be used for texturing. Other nodes offer mask outputs too. Edit: Screenshot quality was crippled during upload, sorry 😞

The masks can be exported, although it requires a bit of a workaround (you need to connect the mask output to the terrain altitude output instead of the main output of the heightfield graph; then you can export the result from the terrain editor). We are working on a proper export method as well as on new terrain features for the future.

Integration plugins (former xStream product line)
The plugins used to be quite unstable in the past, so I understand the negative experiences you have had. However, this was a long time ago (10+ years) and they have improved a lot, as has the stability of VUE itself. I am not saying that you will never experience an occasional crash here and there, but it happens very seldom.

One thing to note is that the plugins don't work like a regular plugin that would use the native UI of the host app and be fully integrated. Instead, the plugins run as a sandboxed software-within-a-software where C4D, Maya or 3ds Max act as the "host application". When you load a VUE scene (or create one from scratch inside the plugin), you have to use VUE's World Browser and tools for handling objects, not the C4D Object Manager, Material Manager and modelling tools. Everything displayed in the C4D Object Manager and in the viewport are low-resolution proxy objects that represent the procedural VUE meshes.
So if you are editing a terrain in the VUE terrain editor, for example, but delete the terrain proxy object at the same time from the C4D Object Manager, you will most likely encounter instabilities at best and a crash at worst, because the VUE terrain just lost its "representative partner" in the C4D scene. This means that deleting objects, assigning materials etc. needs to be done from within the floating VUE dialogues, and you need to understand what you can do with native host tools and what you can't.

Aside from the creation process with the plugin, the focus of the plugins has shifted. In the past, there were no export capabilities and you could only render VUE scenes either inside of VUE or inside of the host app using the plugins and the native render engine (Standard / Physical in the case of C4D). VUE elements and atmospheres were rendered by the VUE render engine and native elements by the host app. The renderers communicated with each other to match GI, light intensities and secondary rays such as reflection and refraction. While this approach is still available in the plugins, it is not future-proof and is limited to the aforementioned native render engines only.

This is why you can now convert any procedural element from the VUE scene into native C4D objects (as baked polygon meshes, with or without baked animation) from within the plugin, and the plugin will automatically create materials for the render engine that is currently selected in the C4D render settings. Besides Standard / Physical in C4D, we support Arnold, V-Ray and, most recently, Redshift. Redshift is currently the most detailed conversion, where we recreate the parameters from the VUE material editor and map them to native Redshift nodes and settings in a Redshift material graph. Procedural materials are baked to texture maps and existing texture maps are just dumped to the project folder and linked accordingly inside the node graph. We are working on bringing the Arnold and V-Ray conversions to the same level of detail as the Redshift conversion in one of the next versions. Currently, we link the main channels in these material conversions (roughness, diffuse, normal, AO etc.), but we do not recreate individual parameters in their native material graphs just yet.

As for the atmosphere, the sky can be exported as an HDRI in various projections (cubic, panoramic...) and clouds can be baked to OpenVDB. So it's now possible to use the plugin for converting / exporting everything to another app or pipeline and then cutting ties with the original VUE scene altogether.
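As promised above, here's a tiny, purely generic sketch of the fractal-heightfield idea (1D for brevity; a real terrain does the same in 2D). This is not VUE's actual formula, just standard fBm noise where the number of octaves - i.e. the amount of fine detail - depends on camera distance; all constants are made up.

```python
import math, random

random.seed(7)
_values = [random.uniform(-1.0, 1.0) for _ in range(256)]  # tiny value-noise table

def value_noise(x):
    """1D value noise: smoothly interpolate random values at integer positions."""
    i = math.floor(x)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)              # smoothstep fade
    a = _values[int(i) % 256]
    b = _values[int(i + 1) % 256]
    return a + (b - a) * t

def fbm_height(x, octaves):
    """Fractal height: sum octaves of noise, doubling frequency and halving amplitude."""
    height, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        height += amplitude * value_noise(x * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return height

def octaves_for_distance(camera_distance):
    """Fewer octaves (coarser detail) for terrain far from the camera."""
    return max(2, 10 - int(camera_distance // 100))

# The same terrain point, evaluated at two camera distances:
x = 3.17
print(fbm_height(x, octaves_for_distance(20)))   # close to the camera: 10 octaves
print(fbm_height(x, octaves_for_distance(850)))  # far from the camera: 2 octaves
```
-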
Nope, other than that both make plants, there's nothing in common between PlantFactory and Taiao. The workflows are different. As always, Insydium uses established workflows coming from X-Particles and from C4D itself (e.g. the layer workflow and the reaction to particles), so it's closer to running a growth simulation. PlantFactory, on the other hand, is node-based with lots of botanical, vegetation-specific settings, plugins and specific integration features. The approach and the focus of the two tools are completely different.
-
I've been working with OV for about a year now. OV Create is really cool and I think people misunderstand the concept. It is not meant to replace dedicated DCCs and never will. It's a cloud-based collaboration platform that offers real-time visualization of scenes that multiple people across the globe are working on at the same time. The built-in render engine is very fast, and OV is also compatible with Pixar's Hydra, meaning you can plug in any Hydra-compatible render engine. It is an alternative to sending assets back and forth with export and import. And USD is a great foundation.
-
Work in progress (materials are not done yet) - a butterfly bush made in PlantFactory.
-
Announcement Maxon Announces an Agreement to Acquire ZBrush
Daniel Seebacher replied to HappyPolygon's topic in Discussions
As a non-ZBrush user who has only a limited amount of sculpting experience, what is it that sets ZBrush's sculpting features apart from the sculpting brushes in generic DCC apps such as C4D or Blender? Again, without much knowledge in this field, I would think that a wide range of default brushes (grab, pull, flatten etc.), symmetry functionality and masks / stamps should allow you to do most things you have in mind? What is the in-depth functionality that takes ZBrush beyond these standard capabilities? -
Ooops, I somehow missed your replies. Thanks, grabbed it by now 🙂
-
Oh, nice! Then this must have been changed recently with the renaming of the apps and the business model change. I remember that when the software was called Alchemist, you couldn't get it. I need to have a look 🙂
-
Sampler (Alchemist) is a subscription-only product, and I think there is no option to purchase separate access to Substance's asset library.
-
I think maintenance is back-dated to when it lapsed. Your maintenance expired in May 2020, which was 18 months ago. That means when you purchase maintenance now (which will cost you more than if you had done it within your renewal period), 18 out of those 24 months will already have been "used", and your extended maintenance will then expire in May 2022, as stated in red in your post. Potentially, you might have saved some money if you had purchased your first extension up to May 2021 and then your second extension up to May 2022 in time, compared to purchasing them both now at once during the last-chance period. But I don't know how much of a price difference it would have made. A quick sanity check of the dates is below.
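A quick back-of-the-envelope check of those dates, assuming the renewal really is back-dated to the lapse date and covers two 12-month extensions bought at once:

```python
from datetime import date

lapsed = date(2020, 5, 1)        # maintenance expired May 2020
covered_months = 24              # two 12-month extensions purchased together
months_already_elapsed = 18      # May 2020 -> November 2021

total = lapsed.month - 1 + covered_months
new_expiry = date(lapsed.year + total // 12, total % 12 + 1, lapsed.day)

print(new_expiry)                               # 2022-05-01, i.e. May 2022
print(covered_months - months_already_elapsed)  # 6 months of coverage left
```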