keppn · Registered Member · Posts: 376 · Days Won: 18

Everything posted by keppn

  1. I'd like a shiny new, full-fat, highly-performant particle system that also enables fluid and granular solvers - preferably working in a unified system with cloth, pyro, etc. I'm tired of paying Insydium and waiting for substantial performance increases.😕
  2. Perhaps Insydium Taiao is suitable for the task? I admit, I have neither experience with Ivy generator nor with Taiao 😅 https://insydium.ltd/products/taiao/
  3. I can't speak for Octane at all, but if you're going to use Redshift, a split GPU setup really makes the difference.
  4. I would recommend a dedicated midrange GPU for the desktop and other apps. Add to that high-end GPU(s) reserved solely for Redshift rendering. Downside: it's hard to cram everything into one case with good thermals, and power usage goes up. Right now, C4D can't use the secondary GPU for simulation, but maybe that will be configurable in the future. Benefit: your overall system stability and responsiveness will improve big time (and you also circumvent that nasty NVIDIA VRAM bug that hampers render times by not freeing VRAM properly). Also, you can work and render at the same time, which for me is a big plus. When doing lookdev, you can have the C4D viewport run on the desktop GPU with all the bells & whistles and simultaneously have a lightning-quick Redshift IPR running on the render GPU. That works really nicely.
  5. In the "Retain scale number" thread: From the perspective of a casual bystander, Chester made an incorrect assumption about Allenhanford's experience, but did so in a polite, helpful way. Chester, maybe it's better to keep answers simply factual. Allen, you could have just thanked Chester and corrected his assumption in an equally polite way. A question was asked, a helpful, correct answer was given; there's no need to fight. Please keep it civil, folks, it makes this forum better for everyone.
  6. And: freedom! 🙂 flexiblecan.mp4
  7. Very broadly like this:

     1. Make some basic, low-poly shapes.
     2. Put them in a Volume Builder.
     3. Adjust the radius so the objects flow together. Adjust the low-poly shapes as well when needed; this is a push/pull step.
     4. Add a smoothing filter to the volume stack and put a cache layer on top for performance.
     5. Satisfied? Now put the Volume Builder in a Volume Mesher.
     6. Adjust the Volume Mesher so you have detail in the right places. This can be mid- to high-poly.
     7. Now put the Volume Mesher into a Remesher object and set it to "ZRemesher". This will give you nice, clean topology.

     Here's a video of the new radius feature and a screenshot of the whole volume stack. The nice thing is: the whole setup is completely parametric, so you can still add detail at the volume level, for example. yeswecan.mp4 As you see, this can't compare with proper SDS-modelling, but it took me literally 60 seconds to achieve and will render nicely. It's a good alternative to have in the toolbox 🙂 A minimal script version of this stack is sketched below.
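     For anyone who prefers setting this up via scripting, here is a minimal Cinema 4D Python sketch of the same stack (Script Manager). It assumes the SDK symbols c4d.Ovolumebuilder and c4d.Ovolumemesher; the Remesher step is left out because its object symbol differs between versions, and whether child shapes are picked up automatically as Volume Builder inputs without a UI drop may also depend on your release, so treat this as a rough starting point.

         import c4d

         def main():
             doc = c4d.documents.GetActiveDocument()

             # Two low-poly base shapes that should flow together
             cube = c4d.BaseObject(c4d.Ocube)
             sphere = c4d.BaseObject(c4d.Osphere)
             sphere.SetAbsPos(c4d.Vector(150, 0, 0))

             # Volume Builder turns the shapes into an SDF volume,
             # Volume Mesher turns that volume back into polygons
             builder = c4d.BaseObject(c4d.Ovolumebuilder)
             mesher = c4d.BaseObject(c4d.Ovolumemesher)

             # Hierarchy: Volume Mesher > Volume Builder > shapes
             doc.InsertObject(mesher)
             builder.InsertUnder(mesher)
             cube.InsertUnder(builder)
             sphere.InsertUnder(builder)

             c4d.EventAdd()

         if __name__ == '__main__':
             main()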
  8. I'd like to add a pragmatic suggestion about choosing your workflow:

     If you need to deliver renders to a client, I'd go for the volume builder. You'll be finished faster and have time for other things. The quality may be only 90% of what could have been achieved with SDS-modelling, but I doubt people would notice.

     If you need to deliver open files, show off your skills. Do it in SDS-modelling, make the cleanest mesh possible and a tidy scene. Name and document everything, including descriptive names for texture maps etc. People receiving your file will not only thank you, but remember you as a professional. So make sure you leave your contact info in the scene file 😉

     If it's for a personal project, ok, go for it and learn. I'd still model something appealing, so you have something nice for your portfolio. That oil can... you can model it in the best way possible, and I'm sure the quad-evangelists of this forum will approve, but in the end it will be a well-modeled atrocious can 🙂 Make something completely beautiful instead!
  9. No problem, and I didn't mean to flame. I don't know if you're in a role where you have any say in this, but sometimes you can sharpen your profile and gain clout by giving a clear judgement and making an alternative proposal. Whether that applies here is up to you 🙂 (Also, I have begrudgingly modeled some garbage product designs in my time, no worries, hahaha 😉)
  10. Or just propose a new design, because that thing sure is atrocious 😅
  11. My experience: Embergen is a wonderful tool. It's super-performant, has a modern UI/UX and it produces fantastic results. While it's fun to use and play around with, interoperability with other apps is quite cumbersome and error-prone. Pyro is surprisingly performant as well, but it's not a looker in the viewport like Embergen; final results are in the same league, though. Pyro's huuuge benefit is being natively in the scene: interaction with other objects is easily achieved and iterated upon, and the overall believability of your scene improves much faster this way. Personally, I was a huge Embergen fan, but I haven't touched it since Pyro blazed on stage 🙂
  12. I mostly come to the site on mobile. The red banners are piling up; do I get one for every payment I made? 🙂
  13. CV-LayerComps was updated a while ago; I've been happily using it in C4D 2023. It also lets you set a layer where newly generated objects go ("CV-ActiveLayer"). https://www.cineversity.com/vidplaylist/cv_toolbox/cv_toolbox
  14. Sorry, I didn't mean to boast, but rather point out that the Asset Browser shouldn't lock up your system - which hopefully can be solved.
  15. Whatever works! 😄 Also, sometimes it's just the time to walk a new path, even if the first walk isn't the fastest. New tech tends to pay off tremendously down the line, so props to you! Thanks for sharing 🙂
  16. I must admit, I'm flabbergasted by the amount of preparation you're willing to do to achieve a good composite. The workflow seems overly laborious to me, but maybe I didn't grasp the requirements properly. Couldn't you... just model that room, for example?
  17. On my system, "Speedup Asset Search" runs as a background task. I can fully work while it's doing its thing, even on heavy scenes.
  18. You mentioned an "internal MoGraph cache" in your original post. I guess that's what's lost in your tests. Try saving the MoGraph cache externally; I guess the file size of the external cache will be exactly the size difference in your test.
  19. Cool tech indeed! The single snippet of geometry you posted above looks quite unintuitive, though... is it hard to develop the form, so it patterns nicely?
  20. The log file should have useful information to help pinpoint the problem. I'd recommend opening a thread on the official Redshift forum; Adrian and the community are very insightful and can probably read a solution from your log. https://redshift.maxon.net
  21. It does hold up even for close-ups; I'd actually never tried that before 🙂 (Added some texture for the macro feeling.)
  22. Hmm, no problem here. You can use the attached file for reference. I guess there's something odd about your bump.jpg Laser.c4d
  23. I didn't have time to look at your file, but off the top of my head, you can try these steps:

     1. Cache the emitter (Simulate -> Bake Particles).
     2. Add a MoGraph Cache tag to your cloner and bake the MoGraph animation.
     3. Put everything into a Connect object and disable "Weld points".
     4. Now export the Connect object to Alembic. Try leaving all export options at their defaults for the first try.

     In case you can't solve it this way, there's a plugin called "Nitro Bake". It's a little awkward to buy, install and use, but it has actually saved me from going mad twice 😄 https://nitro4d.com/product/nitrobake3/ (A minimal Alembic export script is sketched below.)
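     If you end up exporting baked scenes a lot, the Alembic step itself can also be scripted. Here is a minimal Cinema 4D Python sketch (Script Manager), assuming the documented c4d.documents.SaveDocument call with c4d.FORMAT_ABCEXPORT; the output path is just a placeholder, the exporter settings stay at their defaults as suggested above, and note that this exports the whole document rather than only the Connect object:

         import c4d

         def main():
             doc = c4d.documents.GetActiveDocument()

             # Placeholder output path; adjust to your project
             path = "/tmp/baked_export.abc"

             # Export the active document to Alembic with default exporter settings
             ok = c4d.documents.SaveDocument(
                 doc,
                 path,
                 c4d.SAVEDOCUMENTFLAGS_DONTADDTORECENTLIST,
                 c4d.FORMAT_ABCEXPORT,
             )
             print("Alembic export succeeded" if ok else "Alembic export failed")

         if __name__ == '__main__':
             main()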