Everything posted by bentraje

  1. I mean, there is already a Fresnel node in Redshift. Or are you referring to something different?
  2. Hi @pilF I'm answering the question for reference, just in case another member encounters the same problem in the future. RE: So, now the face rig is transform constrained to a newly created child of the BD_Head_int? But that child does not have the same position as its parent. Does the position not matter? Ah, what you have described is a "parent constraint". There's no default parent constraint in C4D; in Maya it ships as part of the standard constraints. I just recreated it manually. The reason it is needed instead of separate position and rotation constraints is that it respects the orientation space of the object you are constraining to. Previously, when you rotated the body head control to the back, the face head control rotated to the front, or at least not 1:1. That's the problem. A parent constraint solves that. There's a rough sketch of how I recreated it below.
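     A minimal sketch of how that manual parent constraint can look, assuming a Python tag on the constrained (face) control with one user-data Link field pointing at the driver object; the container IDs and names are placeholders, not the actual scene setup:

     ```python
     # Rough sketch of a manual "parent constraint" via a Python tag in C4D.
     # Assumptions: the tag sits on the constrained object (e.g. the face head
     # control) and has one user-data Link field (ID 1) pointing to the driver
     # (e.g. the child under BD_Head_int). IDs and names are placeholders.
     import c4d

     ID_OFFSET_STORED = 1059831   # arbitrary high IDs so they don't clash
     ID_OFFSET_MATRIX = 1059832   # with the tag's own parameters

     def main():
         constrained = op.GetObject()        # object carrying this tag
         driver = op[c4d.ID_USERDATA, 1]     # Link user data -> driver object
         if driver is None:
             return

         bc = op.GetDataInstance()
         if not bc.GetBool(ID_OFFSET_STORED):
             # Capture the offset once: constrained's matrix in the driver's space.
             bc.SetMatrix(ID_OFFSET_MATRIX, ~driver.GetMg() * constrained.GetMg())
             bc.SetBool(ID_OFFSET_STORED, True)

         # Follow the driver's full matrix (position + rotation in its own space),
         # which is what separate position/rotation constraints don't give you.
         constrained.SetMg(driver.GetMg() * bc.GetMatrix(ID_OFFSET_MATRIX))
     ```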
  3. Yeah, this is why I'm not a fan of rigging in Cinema 4D. That god-awful priority system. Every DCC has one, but it's not as horrendous as in C4D because of its hierarchical nature: you literally don't know the order in which things are executed, unlike in Maya, which is a node-centric DCC. Anyhow, can you send the file so I can check? It's easier that way. Just delete your mesh and retain the rig.
  4. @Igor Ah gotcha, thanks. Took me a while to figure it out. The blend mode is not on the Edit SOP parameters per se; it's in the viewport menu haha.
  5. Hi, I'm assuming "Use Slide on Surface" in the Edit node functions like the Slide tool in C4D. It does work, but it is somewhat unreliable since it distorts my topology. You can see an illustration of the problem here: 1728756310_SlideonSurfaceonEditNode.mp4
  6. @Stuggz How about this one? Check whether the GPU/CPU resources have been used in the past 20 minutes; if not, shut down. I haven't used it personally, but it seems capable. The general idea is sketched below.
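     Conceptually it's just an idle watchdog. A rough standalone sketch of that idea (untested; assumes Windows, nvidia-smi on PATH, and the psutil package, with arbitrary thresholds), not the tool itself:

     ```python
     # Idle-watchdog sketch: if GPU and CPU have both been (nearly) idle for
     # 20 minutes straight, shut the machine down. Assumes Windows, nvidia-smi
     # on PATH, and the psutil package; the thresholds are arbitrary.
     import os
     import subprocess
     import time

     import psutil

     IDLE_MINUTES = 20
     CPU_IDLE_PCT = 5.0
     GPU_IDLE_PCT = 5.0

     def gpu_utilization():
         out = subprocess.check_output(
             ["nvidia-smi", "--query-gpu=utilization.gpu",
              "--format=csv,noheader,nounits"], text=True)
         return max(float(v) for v in out.split())   # busiest GPU wins

     idle_since = None
     while True:
         cpu = psutil.cpu_percent(interval=5)
         gpu = gpu_utilization()
         if cpu < CPU_IDLE_PCT and gpu < GPU_IDLE_PCT:
             idle_since = idle_since or time.time()
             if time.time() - idle_since > IDLE_MINUTES * 60:
                 os.system("shutdown /s /t 60")   # Windows; "shutdown -h now" on Linux
                 break
         else:
             idle_since = None   # any activity resets the timer
         time.sleep(30)
     ```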
  7. RE: literally the entire scene is being cached/written. Not sure about "entire scene" unless you are using Scene Import (as opposed to SOP Import). That said, all LOPs nodes are USD. You can verify this by RMB-clicking on a node; there is an option to see the actual USD code (or see the Python snippet below). As for it being converted to USD in real time, I think that only happens when you are rendering with Karma (that's why there is a bit of lag when you switch from OpenGL to Karma). But this conversion process also happens with other third-party renderers, even outside USD/Houdini. For example, Redshift in C4D: upon rendering, it collects C4D's native data and converts it into what Redshift understands, so there is still a bit of lag when loading that data. So in that sense, any renderer that is non-native to a given DCC is "inefficient" by definition; it's just a matter of how well the conversion is implemented. In any case, Redshift on Houdini Solaris is terribly slow at the moment, probably not yet optimized. On the RS forums, there are several claims that RS in Solaris is 5-10 times slower than the regular RS RenderView in Houdini. I can't confirm the figures, but when I tried it, Redshift in Solaris was noticeably slow.
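     For what it's worth, you can also pull the same USD text out of a LOP node with Python instead of the RMB menu; a quick sketch (the node path is only an example):

     ```python
     # Print the USD that a given LOP node produces. Run in Houdini's Python
     # shell; the node path below is only an example.
     import hou

     lop = hou.node("/stage/sopimport1")   # any LOP node
     stage = lop.stage()                   # read-only Usd.Stage at that node
     print(stage.ExportToString())         # the actual USD ("usda") text
     ```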
  8. I'm no longer on Cineversity since it's subscription-exclusive now. But when I was still on it and got that error, it didn't take more than a day before it was resolved. The last time I got it was when they were doing some server upgrades.
  9. It's actually interesting how these software packages came about. MARI was developed to accommodate Avatar's high-res texture creation. People realized its power, but it was limited to film and high-end production (not only because of its price tag but because of the actual workflow). There was a market gap; hence Substance Painter and the old Quixel Photoshop plugin were made lol. (I mean, BodyPaint was the standard back then, but it lacked progress, so yeah, Mari was born. lol RIP BodyPaint.) Katana was developed to accommodate the lighting of Spider-Man's VFX shots. But again, since that software was limited to the "cool kids", there was a market gap; hence Clarisse came onto the scene lol. Then of course later on, Houdini Solaris. huehue
  10. Ah. In essence, USD is really just an interoperability file format, so it exists beyond Solaris. It is a bit of a bold move, to be honest, for Solaris to be married to USD from the get-go. It's much the same as OBJ, FBX, and Alembic, so if you are already fine with those formats, there is probably no need to go USD. A brief progression: those formats were created because of what was lacking in the formats before them. For example, OBJ was used simply to transfer a 3D model, but it can't really carry weights/animation etc. That's why we have FBX. FBX works, but it still has to make some interpretive assumptions (such as joint rotation order, hierarchy, and how an object in DCC A is read in DCC B). That's why we have Alembic (developed by ILM and Sony). It's a cache format, so there's little interpretation and assumption; it's really just point-level animation. USD, developed by Pixar, is like Alembic in that it is cached, but it's not limited to just files. It's also file management, hence medium-to-big productions get more mileage out of it. "Scene description" is not a new thing; every DCC has its own scene description. Maya has it. C4D has it. Houdini has it. It's what happens when you render: the DCC collects the objects, materials, and lights before calculating the pixels. I guess the clearest difference between USD and Alembic is that a USD file can contain another USD file (i.e. part of the "file management"), while an Alembic can't contain another Alembic (AFAIK). See the snippet below. The unique thing about USD is its goal to be "universal", and that's why people gravitate towards it, especially since a 3D artist in a production environment doesn't use just one DCC. I mean, any format wants to be universal for sure lol, but USD has been tried and tested in production (i.e. at Pixar). So in essence, it could have been any format really; it just so happens Pixar was at the forefront of this one. That aside, another reason to use it is that it's relatively faster than all the other formats. Faster to read, to process, etc.
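     To make the "USD can contain another USD" point concrete, a tiny sketch with Pixar's USD Python bindings; the file names are made up:

     ```python
     # Minimal sketch of USD layering/referencing, which Alembic has no real
     # equivalent of. File names are made up for illustration.
     from pxr import Usd, UsdGeom

     # A "shot" file that pulls in other USD files without copying their contents.
     stage = Usd.Stage.CreateNew("shot.usda")

     # 1) Sublayer: stack a whole other layer (e.g. a layout file) under this one.
     stage.GetRootLayer().subLayerPaths.append("layout.usda")

     # 2) Reference: bring one asset file in under a prim of this stage.
     prim = UsdGeom.Xform.Define(stage, "/World/HeroCharacter").GetPrim()
     prim.GetReferences().AddReference("hero_character.usda")

     stage.GetRootLayer().Save()
     ```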
  11. @MJV Oh, in recent versions (H19 onwards), you can actually "render" with Karma outside LOPs. You can hop into the out context and search for a Karma node (or create it via Python, as in the snippet below). I say "render" in quotes because that Karma node is actually a LOPs node, just packaged so that you don't have to go to the Solaris layout 🙂
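     If you prefer scripting it, a one-liner sketch (assuming the out-context node type is named "karma" in H19+; check the Tab menu if yours differs):

     ```python
     # Drop the out-context Karma node via Python instead of the Tab menu.
     # Assumes the ROP type is named "karma" (H19 onwards); it wraps a LOP network.
     import hou

     rop = hou.node("/out").createNode("karma", "karma_render")
     print(rop.path())   # render from its Render button like any other ROP
     ```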
  12. ah gotcha. i'm trying to relearn solaris/lops at the moment. it's my downtime right now so might as well do something productive. the last time i explored lops, it was all "beta". i checked the recent webinars and it's looking good. many utility nodes were added to simplify the process.
  13. Interesting. Just checked that V-Ray also offers a Hydra delegate for Houdini Solaris.
  14. @Igor That makes sense. It's not a deal breaker for me since I can just use an Object Merge for reference. Any other gotchas apart from that?
  15. Hi, just wondering. All this SOP-to-LOPs workflow in Solaris got me thinking: why not just create everything under a SOP Create? It is still the SOP context. Why bother working in the OBJ context at all if that's the case? At this point, Solaris seems like a "super" context that can contain all the other contexts.
  16. @Edward Suckling I'm assuming you are rendering on GPU, because well, who in their right mind uses Redshift CPU anyway (i.e. evil laugh). I'm not sure if you are aware, but there is a nasty issue with Redshift GPU + the latest NVIDIA drivers, especially with high-end RTX cards like the 3090. Basically, some sort of memory-management rigamarole, hence the slow rendering. The ONLY fix is to downgrade the NVIDIA driver to the 497.29 Game Ready driver. You can ask around on the Redshift forum for confirmation. It has been an issue for over a year now, maybe more (?), and it's still not resolved. You can rant on the Redshift forum, but it's not going anywhere to be honest; that issue is raised by a new user almost every week, so people are numb to it now. lol. P.S. It's not a Redshift problem per se; it's more of an NVIDIA driver issue relating to Redshift. So even if Redshift fixes something on their side, it still wouldn't work. NVIDIA has to address it in their driver explicitly, and who knows when that will be.
  17. There is a new cloth system in the latest versions of C4D, but I haven't tried it as I'm on an older version (i.e. perpetual, not subscription). And since they cut off perpetual, my C4D usage is getting less by the day as I transition fully to Houdini lol.
  18. For a complicated set-up, I'd probably use Houdini + Redshift since you can do so much (i.e. create your own "Fresnel"). In the example above, it is limited to 2 images. Theoretically, you could use 5 images: one each for the top, bottom, straight-on, left, and right views. The weighting idea is sketched below.
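     The blending itself is just dot products between the view direction and a few fixed axes; a toy Python sketch of the weighting (illustration only, not renderer code, and the axes are arbitrary):

     ```python
     # Toy sketch of view-angle masking: weight each of the 5 images by how much
     # the (unit) view direction lines up with a fixed axis. Pure Python for
     # illustration; in a renderer you'd build the same dot products in the
     # shading network and use them as blend masks.
     import math

     AXES = {
         "straight": (0.0, 0.0, 1.0),
         "left":     (-1.0, 0.0, 0.0),
         "right":    (1.0, 0.0, 0.0),
         "top":      (0.0, 1.0, 0.0),
         "bottom":   (0.0, -1.0, 0.0),
     }

     def blend_weights(view_dir):
         """Return normalized blend weights for a unit-length view direction."""
         raw = {name: max(0.0, sum(v * a for v, a in zip(view_dir, axis)))
                for name, axis in AXES.items()}
         total = sum(raw.values()) or 1.0
         return {name: w / total for name, w in raw.items()}

     # Camera looking mostly straight on, slightly from the right:
     d = (0.3, 0.0, 0.95)
     length = math.sqrt(sum(c * c for c in d))
     print(blend_weights(tuple(c / length for c in d)))
     ```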
  19. @Augustin Vermot Well, it's not really specific to Redshift. It's just a matter of using masks based on the viewing angle. In the example below, I simply use the default Fresnel. anya_lenticular.mp4
  20. AFAIK, that was how it was advertised. The last "MAX" version of Redshift perpetual was roughly around Aug-Sept 2023. I know that for a fact since they last offered it around Aug-Sept 2021 (with an option to extend to 2023), which is when I opted for a Redshift perpetual.
  21. RE: white circle around the cursor? White circle? What does that even look like? A screenshot would help.
  22. RE: Generating hairs as geometry exacerbates the performance issues by tenfold. As it should. This would be the same even going from Blender hair to C4D hair, or Houdini hair to Blender hair. To optimize it, you need a technical artist, or you can learn Python yourself along with the APIs of both programs; this is too niche for a single DCC to provide a solution for. Basically: 1) export and import only the guides; 2) know the parameters of both DCCs' hair systems so you can replicate the look; 3) store the settings (e.g. in a JSON format) from the source DCC and read them back in the target DCC (see the sketch below); 4) it won't be a 1:1 conversion, but you'll basically get there. I had experience with inter-DCC conversion when I was asked to convert several 3ds Max V-Ray files to Maya Arnold files. That was worth it because it covered several files; basically, the cost of writing a script offset the hours of manually converting them.
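     A sketch of step 3; the parameter names and the mapping table are entirely hypothetical, since every hair system names these things differently:

     ```python
     # Step 3 sketch: dump hair settings from the source DCC to JSON, then remap
     # them onto the target DCC's hair system. Parameter names and the mapping
     # below are hypothetical placeholders.
     import json

     def export_settings(path, settings):
         """settings: plain dict read from the source DCC's hair node."""
         with open(path, "w") as f:
             json.dump(settings, f, indent=2)

     def import_settings(path, mapping):
         """Remap source parameter names to the target DCC's names."""
         with open(path) as f:
             src = json.load(f)
         return {mapping[k]: v for k, v in src.items() if k in mapping}

     # Example round trip with made-up parameter names:
     export_settings("hair_settings.json", {
         "guide_count": 400,
         "clump_scale": 0.35,
         "frizz_amount": 0.12,
     })
     target_values = import_settings("hair_settings.json", {
         "guide_count": "guides",     # source name -> target name
         "clump_scale": "clumping",
         "frizz_amount": "frizz",
     })
     print(target_values)   # feed these into the target hair node via its API
     ```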
  23. @kkamin No worries. I'd be glad to do it for free (although admittedly only in my spare time), just as others have done the same for me when answering my questions in forums 🙂
  24. Apart from rigging and FX (Bifrost, particles, MASH, etc.), you can probably get up and running over a weekend. Price aside, I of course prefer Maya over C4D, any day. The only thing that will probably throw you off is how it is designed: unlike in C4D, where you see everything in the Object Manager, you'd probably be checking the Node Editor every now and then, mostly for rigging. If you get stuck on something in Maya, send me a PM; I might be able to point you in the right direction. I have roughly the same number of learning years in Maya and C4D, about 6-7 years each, so if you have questions about how to do something in Maya the way you would in C4D, I can probably answer them, except for MASH (the MoGraph equivalent). I don't deal with that, since for mograph I used C4D before and now use Houdini.
  25. Again, same argument as before: what's the incremental cost of maintaining both subscription and perpetual at the same time? Relatively minimal. Even Houdini offers both. Why not have both? Go figure. TBH, I couldn't care less. I made the transition long ago, so I'm not affected. Nothing released from R22 to R27 makes me drool, since those features already exist in my chosen DCC, often in better form. Except perhaps Python 3; I kind of need Python 3 for future-proofing lol. R21 is still on Python 2. My only regret is buying a personal copy of C4D R21 when, with the next release, they went subscription and removed the roughly $500 upgrade for every release.