Everything posted by Vizn
-
The arch-vis stuff is interesting, but far too unrealistic. The results are really just randomized elements taken from the sample picture, processed through the model's database, and blended with the target render. Definitely not "creative" or professional in any manner. For now. Using AI to bring in relevant assets (and place them where they should go) could be a useful jumpstart for dressing up scenes; larger projects would benefit the most from that. Watching these, the only thing I would really be excited for AI to do is quickly generate different lighting conditions for a scene: something that produces night, day, or any type of color-grading without me having to set up all the lights, environment, and cameras manually. I want to art direct my renders more easily without having to make different versions, or takes.
-
So they found the "Einstein"-Tile (as in "one stone")...
Vizn replied to kweso's topic in Discussions
I thought the same thing. It might be useful as a way to re-mix a standard seamless texture, so the artist doesn't need to create the texture in that specific shape. It would be crazy to paint a seamless texture in that shape. So the math should be used underneath (behind the scenes) to make a standard texture not appear to repeat. Basically, what the "seamless" option on the C4D material tag attempts to do, I think. Not sure if that option is still in the newer versions, but I digress. This new shape would not be useful for anything that needs a repeated structure, like typical floor tiles. Random/noise patterns may benefit from it, though.
-
Interesting. Still not enough info, though. Output settings? Example of the candy's mesh? Cloner settings? Render settings? Example of the viewports and object list to show how you've set this up? Can't optimize without this type of info, unfortunately. Personally, when I see something like this, I try to figure out if it can be done with a few small patches that I could then stitch together for the final image. If the deadline is too close, just use cloud rendering.
-
When the render engine requires more physical memory than the system has, it starts using the hard drive as 'virtual memory', which is way slower than RAM, so the render slows way down. A full day (24 hours?) to render a 10,000px static image seems excessively long. I can almost guarantee that you are running out of RAM during the render. Look at optimizing your scene by collapsing generators and deformers and reducing mesh densities (if possible). Another way to reduce the load is to render the image in regions, or strips, that get stitched together to make the full image. Not sure if the built-in renderers have specific settings for this, but most 3rd-party renderers do. Some insight into how your scene is constructed (like screenshots that show mesh densities and/or how your object hierarchy is put together) will enable more specific optimization suggestions.
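Just to put rough numbers on the frame buffer alone (an illustration only; in practice the geometry, textures, and extra pass buffers usually dominate the memory footprint), assuming a 4-channel float buffer:

```python
# Back-of-envelope frame-buffer estimate for a 10,000 px square render.
# Assumptions: 4 channels (RGBA) at 32-bit float; real engines also hold
# geometry, textures, and multi-pass buffers on top of this.
width = height = 10_000
channels = 4            # RGBA
bytes_per_channel = 4   # 32-bit float
buffer_gb = width * height * channels * bytes_per_channel / 1024**3
print(f"~{buffer_gb:.1f} GB per full-frame buffer")  # ~1.5 GB each
```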
-
Ugh. Sorry you gotta deal with that. Relinking is something that should always work as expected. Maybe the old "copy-and-paste-all-to-new-file" workaround might get the relink to stick. Maybe relinking using a previous version of C4D? Beyond this, sounds like a support ticket is in your future!
-
Process for Octane:
-
Might be a legacy thing for older compositing programs (or versions) that didn't, or still don't, handle the ProRes format, or at least don't handle alpha embedded in a MOV and would thus require the separate file. As long as you don't see it rendering each frame twice (once more for the extra alpha), you shouldn't be concerned. Normally, all of the info is processed in one rendering session, then any explicit multi-passes are derived from that and simply saved out. Maybe if the extra file were super huge it could impose some delay. Otherwise, just chalk it up to legacy compatibility and delete it!
-
What render engine? There is likely an asset-management function included as part of the render engine's package. For instance, Redshift has the Redshift Asset Manager under the Redshift menu.
-
One small detail I forgot to mention: the label image should be saved in landscape orientation. That is, the long dimension should run left to right in order to match the cylindrical projection orientation in Cinema.
-
First, make sure the label image has the correct width-to-height ratio to fit the polygon selection. In case you didn't know, use the formula Diameter × π (pi) to get the length around the roll, and simply measure the selected polys for the other dimension. If your label image matches the ratio, all you need to do is set the material tag's Projection to Cylindrical, then activate Texture Mode and Enable Axis to see the orientation of the cylindrical projection cage. Rotate it to match the orientation of the model, then right-click the material tag and select 'Fit to Object'. Since the label polys are not disconnected, the cage will fit the entire model, but you can then use the Scale tool to squish it to fit the label polys. Finally, disable Texture Mode and Enable Axis. Let me know if you need some visual guides for this.
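If you want to sanity-check the ratio before touching the projection, here's a quick sketch (the diameter and label height are hypothetical placeholders; substitute your model's real measurements):

```python
import math

# Hypothetical measurements for illustration; use your model's real values.
diameter = 8.0        # roll diameter, in scene units
label_height = 5.0    # height of the selected label polys, in scene units

circumference = diameter * math.pi     # length around the roll
aspect = circumference / label_height  # required width : height of the label image
print(f"circumference = {circumference:.2f}")
print(f"label image aspect ratio ~ {aspect:.2f} : 1 (landscape)")
```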
-
Import FBX or Alembic without mesh triangulation. How?
Vizn replied to Mantas Kavaliauskas's topic in Cinema 4D
I found several tips and info re: FBX export/import in this old thread: https://forums.autodesk.com/t5/3ds-max-modeling/prevent-fbx-export-from-triangulating/td-p/5205045
-
You could try placing the RS light into a Null. Edit: You could also try making an Instance of the RS light, then using the Instance in the Cloner, instead of the original. Not a guarantee. Just something else to try.
-
If predicting the future were easy, we would all be rich and prosperous! The current slew of much-talked-about AI systems are not creative in any way. It takes a human eye to look at AI art and say "that's pretty creative", but the AI itself is not sentient; it is just a complex algorithm doing what it has been trained to do. And it will do nothing if left alone. Give it some interactive sliders, option dialogue boxes, and input fields, and it could be the next big Photoshop filter plugin for humans to feel creative! The possibility of bad actors unleashing crippling AI algorithms on public systems is no worse than the attacks those systems contend with now. The same possibility exists that others will develop specialized anti-AI tools to protect and defend against those advances. It's not as if all innovation in other systems will go out the window in favor of some kind of master system. Not with this economy! An important thing to remember is not to fall into overthinking all of the negatives these new things spark in our imagination. IMO, which is purely based on my observations of humanity during my lifetime, there will be lots of both good and bad from advanced computer systems, which includes AI. How good? How bad? Who will win? Who will lose? Predicting the future is not easy! 😉
-
I completely understand this. I mentioned the file size really just to illustrate how it matches up to the memory report. However, I am still very curious why the actual RAM usage during rendering is 10x that of the compiled rendering data when utilizing CPU buckets.
-
Interesting. Thank you for this insight. Since everything still works as expected while I am working in the file, I won't focus on the memory report, and just chalk it up to old coding limits.
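For anyone curious what those "old coding limits" likely mean in practice: my guess (an assumption on my part, not confirmed) is that the scene-info dialog stores the byte count in a signed 32-bit integer, so any total past about 2.1 GB wraps around to a negative number. A quick sketch of the wraparound:

```python
# Demonstrates signed 32-bit wraparound, a plausible cause of the negative
# memory figure (assumption: the dialog uses a signed 32-bit counter).
def as_signed_int32(n_bytes: int) -> int:
    n = n_bytes & 0xFFFFFFFF           # keep only the low 32 bits
    return n - 2**32 if n >= 2**31 else n

actual = int(2.5 * 1024**3)            # hypothetical ~2.5 GB of scene data
print(as_signed_int32(actual))         # -1610612736, i.e. displayed as negative
```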
-
Working in R18, I have this arch-vis file that is ~820MB on disk. It's only half-finished, but already the scene info looks like this... What does this memory report even mean? It's about double the file size, but why is it negative? Does this indicate an underlying issue? I think the poly numbers in parentheses are triangles? Is 35.6 million tris a lot for a system with 64GB RAM? I've hit CPU rendering snags involving maxed-out RAM. This scene writes out a 4GB compiled data file for distributed rendering, yet the system uses 50-60GB of RAM while rendering. Is it really just all the generator and deformer overhead? As a side curiosity, would this even render on a GPU? Which part of all this does the "must fit into the GPU memory" rule refer to? The compiled size (4GB) or the rendering utilization size (50-60GB)?
-
Right, but he needs that 4000px image to be natively 4000px. If he just increases the pixel dimensions in Photoshop, all that happens is the existing low-res pixels get bigger, which is not proper high-res.
-
Not a lot of info to go on. But I would start troubleshooting by monitoring the system resources while the render progresses, particularly how the memory is utilized. If memory usage ticks up close to max, it could slow things down due to memory swapping. Setting the virtual memory system to use all drives, or upping the swap file size (if only one drive is available), might help.
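A simple way to do that monitoring outside of Task Manager is a little Python loop using the third-party psutil package (pip install psutil); run it while the render progresses and watch for RAM creeping toward 100% alongside rising swap usage:

```python
import time
import psutil  # third-party: pip install psutil

# Log RAM and swap usage every 5 seconds while the render runs.
# Sustained RAM near 100% plus climbing swap points at memory swapping.
while True:
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"RAM {mem.percent:5.1f}%   swap {swap.percent:5.1f}%")
    time.sleep(5)
```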
-
Not sure what merging materials means. But I have replaced a material on dozens of objects spread throughout a giant object list simply by right-clicking the material in the Material Manager, selecting "Select Texture Tags/Objects", and then dragging the replacement material into the proper field in the Attribute Manager.
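If you ever need to do the same swap in bulk via scripting, here is a minimal Script Manager sketch of the idea (the material names "OldMaterial" and "NewMaterial" are hypothetical placeholders; adjust them to your scene):

```python
import c4d

def swap_material(obj, old_mat, new_mat):
    """Walk the object hierarchy and repoint every texture tag using old_mat."""
    while obj:
        for tag in obj.GetTags():
            if tag.GetType() == c4d.Ttexture and tag[c4d.TEXTURETAG_MATERIAL] == old_mat:
                tag[c4d.TEXTURETAG_MATERIAL] = new_mat
        swap_material(obj.GetDown(), old_mat, new_mat)  # recurse into children
        obj = obj.GetNext()                             # then walk siblings

def main():
    doc = c4d.documents.GetActiveDocument()
    old_mat = doc.SearchMaterial("OldMaterial")  # hypothetical material names
    new_mat = doc.SearchMaterial("NewMaterial")
    if old_mat and new_mat:
        swap_material(doc.GetFirstObject(), old_mat, new_mat)
        c4d.EventAdd()  # refresh the UI

if __name__ == "__main__":
    main()
```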
-
I don't do hair, but I know members here do. Possibly the best thing I have learned from these forums is that the more effort you put into showing and explaining your issue (the specific hairstyle, in this case), the better your chances of learning real techniques, rather than just hoping for easy plug-and-play solutions. The active members here truly wish to help, but that depends on the approach. I understand time is money, so factor that into your request for help by being upfront about exactly what you're after.
-
Check the Direction tab of the Mirror tool and make sure the Origin and Axis are set properly. It was likely set to Rotation initially, so it rotated to mirror instead of flipping.
-
Notice that your spline in the blank scene is Bezier, but the spline in the main scene is Linear, even though it has soft interpolation. I don't know why it's doing that, but just change it to Bezier and it should be fine.