Everything posted by 3D-Pangel
-
Maxon's Spring 2022 Launch Event | Live Stream | S26 Announcement
3D-Pangel replied to HappyPolygon's topic in News
Not following you. What would you have liked them to do with Redshift instead, other than make it more generally available to everyone? And by everyone, I mean both perpetual and subscription license holders, given that RS is no longer offered as a perpetual license. I'm not a big fan of paying $264/year to keep RS. I use C4D as a hobby and can foresee needing GPU rendering maybe once a year to do an animation. In the meantime, I can use Redshift CPU to do all the lighting, materials, etc., and when it finally comes time for a longer-format render, I can drop $45 for a monthly license. Honestly, the only two things I want out of Maxon One (to date) are C4D and RS. This is still a win. Dave
-
Maxon's Spring 2022 Launch Event | Live Stream | S26 Announcement
3D-Pangel replied to HappyPolygon's topic in News
R26 is very impressive across the board. People are getting hung up on Redshift CPU not being Redshift GPU. What everyone is overlooking is all that Redshift brings to improving lights, volumes, and atmospheres, which goes way beyond what Physical Render can do. With RS, you can now work with VDB files from X-Particles and shade the temperature and density channels natively in C4D. Plus, you have all this capability for ray path optimization that you never had before. And while GPU capability is now subscription only, Redshift CPU is now perpetual. To me that is a big win. If I need GPU capability, I can purchase a one-month subscription to RS GPU, but all the scene building can still be done ahead of time with my perpetually licensed CPU version. Dave

As an R23 perpetual license holder who held off on upgrading to R25 until R26 was announced, I need to call Maxon and discuss perpetual upgrade options, as these are improvements I want to enjoy with R26.
-
Brilliant....as usual. One favor: could you create that same scene again but make the n-gons visible? Or....show an example of what constitutes a set of "island" n-gons? I want to know what to look for so that I can execute your fix properly (or, better yet, avoid the problem in the first place). Thanks, Dave
-
Mr. Jones is my favorite C4D artist. His opening animations at the beginning of each of these challenges just blow me away. Not sure if anyone remembers these, but these montages bring me back to the good old days of the "Mind's Eye" videos. They came out in the 90's, at a time when really good CG animation was extremely rare. Those videos came out every two years and they were just pure eye-candy. Getting my hands on one was always a treat.

Well, Mr. Jones is the new distributor of mind-bending eye-candy and I look forward to his montages as much as (and even more than) the Mind's Eye videos. My only regret is that some of the worlds being shown are just so inviting you want to spend more time in them. 4 to 5 seconds per clip is just not enough time to take it all in. Kind of like someone pulling the ice-cream cone out of your hand after you take the first lick. You feel the loss as each one passes by. Maybe that is why they call him Pwnisher? He taunts us with eye-candy. 😆

Well, these challenges are now becoming an expected staple in my viewing diet, and I eagerly look forward to the next dose of many delicious (but tiny) servings. Dave
-
Chaos Corona 8 for 3ds Max and Cinema 4D Now Available
3D-Pangel replied to Heidi Lowell's topic in News
Interesting.....though I have to say that this is the first product announcement I have seen on the forum without any links back to the main website to either learn what is new or download a demo. So allow me:

What's new: What's new | Chaos Corona (corona-renderer.com)
Download: Chaos Corona 8 for 3ds Max and Cinema 4D available now | Chaos Corona (corona-renderer.com)

Dave
-
I thought Mike was just 3D printing his model for kicks and giggles -- maybe a nice paperweight or a new desk lamp. You know...simple stuff that we mere mortals might dream about doing with our 3D creations but never actually have the knowledge, time, or equipment to accomplish. Well....leave it to Mike to take this one step further than any of us would ever dream of going: yes, he is building an animatronic puppet. Now, I am a mechanical engineer, and I am absolutely impressed by the quality of the design and how nicely the servos fit into the slots he had to design into the 3D printed model. This took some planning and some solid design skill, so bravo! Honestly, would someone from Weta or ILM just hire Mike already!!! He is a quadruple threat: he can design, engineer, animate, and model his creations into various realities, both physical and virtual. A rare package of gifts indeed! Dave
-
There is a plugin called Image2Plane that does this to a certain extent. You can point it to a folder of images and it automatically maps each image in that folder onto a plane sized with the same aspect ratio as the original image. I would imagine that is the majority of the work, because after that it is no more than placing all those planes under a MoGraph random effector. Hopefully this helps. Check out the link provided to see if that works for you. Dave
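If you would rather not buy a plugin, the same idea can be scripted. Below is a minimal Script Manager sketch, not how Image2Plane actually works internally; the folder path, the 200-unit base width, and the simple side-by-side spacing are all assumptions you would change for your own setup.

```python
import c4d
import os

IMAGE_FOLDER = r"C:\path\to\images"   # assumption: point this at your own folder
BASE_WIDTH = 200.0                     # assumption: width of every plane in scene units

def main():
    exts = (".jpg", ".jpeg", ".png", ".tif", ".tiff")
    files = [f for f in sorted(os.listdir(IMAGE_FOLDER)) if f.lower().endswith(exts)]

    for i, name in enumerate(files):
        path = os.path.join(IMAGE_FOLDER, name)

        # Read the bitmap just to get its pixel dimensions.
        bmp = c4d.bitmaps.BaseBitmap()
        if bmp.InitWith(path)[0] != c4d.IMAGERESULT_OK:
            continue
        aspect = bmp.GetBh() / float(bmp.GetBw())

        # Plane primitive sized to the image's aspect ratio.
        plane = c4d.BaseObject(c4d.Oplane)
        plane.SetName(name)
        plane[c4d.PRIM_PLANE_WIDTH] = BASE_WIDTH
        plane[c4d.PRIM_PLANE_HEIGHT] = BASE_WIDTH * aspect
        plane.SetRelPos(c4d.Vector(i * (BASE_WIDTH * 1.1), 0, 0))  # crude spacing

        # Material with the image in the color channel.
        mat = c4d.BaseMaterial(c4d.Mmaterial)
        mat.SetName(name)
        shader = c4d.BaseShader(c4d.Xbitmap)
        shader[c4d.BITMAPSHADER_FILENAME] = path
        mat[c4d.MATERIAL_COLOR_SHADER] = shader
        mat.InsertShader(shader)
        doc.InsertMaterial(mat)

        tag = plane.MakeTag(c4d.Ttexture)
        tag[c4d.TEXTURETAG_MATERIAL] = mat
        doc.InsertObject(plane)

    c4d.EventAdd()

if __name__ == "__main__":
    main()
```

From there you could drop all the planes under a Fracture object and add the Random Effector exactly as described above.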
-
Thank you for fully outlining where AI could go in its ability to understand human perceptions of what makes good "art". For the purposes of my argument (namely, that it is not truly "your art" when the thinking machine is making all the artistic choices for you), I think you helped make my point.
-
If I may, I think everyone is viewing this from the perspective of what AI can do today and NOT from the perspective of what this type of AI could become in the future. So let's extrapolate. And for those who may scoff at this extrapolation, recall all those people who scoffed at the possibility of digital humans in the 90's, when CG was in its silver-ball-and-checkered-floor phase.

Okay...so let's imagine the future. The year is 2035 and you decide to spend a few moments tuning out the real world and activating your iVatar: a neural implant inserted into the back of your neck enabling instantaneous connection with a virtual world existing on a truly global platform. You are represented in this world by your lifelike avatar and can receive visual, audio, and haptic feedback from this virtual world directly into your brain. All you need to do is close your eyes and you will fully perceive this new reality as if you were really there. If things get too real, you are back in the real world the moment you open your eyes again. BTW: iVatar was created by nVidia after they acquired Apple in 2029, and iVatar sales have been so successful that the last iPhone produced was the iPhone 23 in 2033.

In this virtual world, you stand at an easel in the middle of the Congo river delta, power up DALL-E World V9.7 and think or say "give me an impressionistic painting of water lilies in the style of Claude Monet". DALL-E immediately creates this:

Now....is this art that you created?

The example of people complaining about the advent of photography replacing painting is not valid. There were artistic decisions still being made with photography, both at the time the photo was taken (framing, focus, exposure) and as the image was photo-chemically developed (dodging, burning, cropping, etc.). Those decisions did NOT end when photography went digital: in fact they got more complex with color grading, compositing, filters, etc. As with painting, there was still a technique that needed to be mastered in that transition to photography, and the truly successful photographers had their own "style". That is why Ansel Adams' photos are so iconic. You recognize his style immediately. That style is what still made him an artist.

Now, if AI in DALL-E never progresses past the point of being a hands-free version of Photoshop, then maybe the user still has an opportunity to show his/her artistic technique and create their own style. For example, if you tell DALL-E to brighten this section, blur that section, shift the color balance on that tree, etc., then those are still artistic decisions owned by the "artist" in the creation of their own "style". But this is NOT where AI is going. And whether it be DALL-E or some other software endeavor, DALL-E has shown the capability for AI to one day bring us to a place we don't want to go. Here is why: the whole point of AI is to replicate the style of others. That is what AI is all about: iteratively learning from what was created in real life so that it can copy it exactly, and possibly improve upon it. You don't need to build an AI engine if the user is supposed to make all the artistic decisions. If all you need to do is say "give me XYZ doing ABC in the style of Ansel Adams", then you are not creating your own art. You are copying someone else's style without effort. This does NOT make you the artist, in the same way that Xeroxing a painting does not make you an artist. Dave
-
So this really raises some questions regarding "art" and, more importantly, the value we place on "art". I would hate to think that at some point in the not-too-distant future, you could enter the text "expressionistic image of water lilies" and get something that Claude Monet could have created. If that is all the effort that is required with AI, then (IMHO) digitally created art is no longer special. It instantly becomes a commodity when it is created with ease and in great abundance. It has no value, either monetarily or in the eye of the viewer.

So much for NFTs. Why go to all that trouble to ensure the originality of your digital artwork when how it was created is open to debate? Did the artist labor over every polygon, texture, light position and render setting, or did they use AI? With those doubts, would you ever pay anything for digital art, or would you instantly view it as a mechanism for scam artists to make a quick buck and stay away? For example, had Beeple waited 10 more years before cashing in on his one-piece-of-art-every-day-without-fail streak, no one would have paid $69 million for a body of work you could produce by simply typing a bunch of phrases. Anyone can type a phrase into an AI engine every day. Once you begin to question the effort required to create something, the creation loses value pretty quickly.

The implications for the digital content creation industry are also significant. Do you need as many concept artists? Storyboard artists? Look-development teams? Content creation companies will love it because of the increase in output and how it speeds up the whole brainstorming process...and they can get all those benefits with fewer artists.

I don't know.....it just feels like every week we see a new wonder from the world of AI. Some are amazed, impressed and/or excited by what this technology brings. As for me, I also see its potential to devalue and dehumanize our contributions. Dave
-
Okay.....after a short pause, I am now working on modeling the landing bay as seen from the outside of the Death Star, basically using this matte painting from Return of the Jedi as inspiration: Given the potentially monstrous poly count, the landing bays pictured here need to be converted to a very low-poly version....and come in 3 different sizes. The fully modeled landing bay I just completed weighed in at close to 606,000 polygons and 75.4 MB. The low-polygon version is shown below: Again, this is ONLY meant to be seen at a distance and have just enough interior geometry so that there is a natural parallax shift going on as the camera moves past. Essentially, it is a box with textures made from interior renders of the full-size hangar bay. A low-poly version of the column supports was placed on the sides, over the rendered texture of the side walls, to take the curse off it being a flat image. All the textures have luminance so that no interior lighting is required. Still a few more things to add/improve, but right now it weighs in at 343 polygons and 0.178 MB. A good start. Dave
-
NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI
3D-Pangel replied to HappyPolygon's topic in News
Wasn't BARF created by Tony Stark in Captain America: Civil War, and the tech behind Mysterio's powers in Spider-Man: Far From Home? Is life imitating art in more ways than one here? Dave
-
Thank you! But honestly, I can't remember ever meeting you! Dave 😆
-
Honestly, I love the lighting of the people. It really creates a sense of mystery and wonder. It also takes the curse away from the uncanny valley. Seeing them in better light will not really add anything to the animation, but it will hurt it in the long run if they are shown to be squarely at the bottom of the uncanny valley.

Relative to the fence.....I agree that because there is a fence in the shot of the people and then a cut-away to the fence around the launch pad, you get the impression that they are the same fence. So it is an editing issue. If there was an interim wide shot that showed just how far away the spectators were from the launch pad (and you could see the fence around the spectators in that shot), you would realize that they are in fact two fences. That may work, but let's think about why the fence around the spectators is even necessary. The purpose of the fence in the second shot of the people was to provide some depth to the tracking shot of the father playing with his daughter on his shoulders. Otherwise the tracking shot would not read as well. You could replace the fence with small bushes, or change the type of fence around the spectators....maybe a post-and-beam fence, the type you would find in a farmer's field.

I also missed the shooting meteor. Nice touch. Yes....it does have a CE3K vibe to it. CE3K is one of my all-time favorite movies. The opening sequence when they find the fighter planes in the desert is just a masterpiece of editing, music, lighting, camera placement and directing, and (IMHO) nothing has come close to creating such a true sense of wonder and mystery in the opening shot of a movie. It perfectly sets the stage for what is to come: For a movie that came out 45 years ago, that scene holds up extremely well. The broken stone fence in this shot from CE3K might solve the fence issues in your launch animation. Replicating it in "T-Minus 90" would be a nice homage to CE3K, as I do see both having that same sense of wonder and mystery. Dave

BTW: Bit of trivia. When CE3K starts, the music builds over a dark screen until, at its peak, there is a visual "pop" to the desert. To get that pop, the editor inserted one completely white frame between the darkness and the desert scene. In combination with the music, I remember how everyone jumped in the movie theater back in 1977. A pretty interesting technique to try in the future!
-
That was soooooo satisfying! And to think....he did all that in low poly mode! :0 😀 Dave
-
Thank you for all the help and suggestions. I always learn something from Bezo's Xpresso rigs. The light was on a folding wing that was mirrored: the right wing was modeled and the left wing was mirrored from it. As C4D symmetry only handles geometry, an instance of the light on the right wing was moved to where it belonged on the mirrored left wing. As the Xpresso controls folded the right wing, the left wing light needed to move with an opposite rotation to stay where it belonged on that mirrored left wing.

It was actually easier to fix than I thought it would be. I placed the left wing light under a null and positioned that null at the pivot point of the mirrored left wing. That null was then placed into the Xpresso setup for folding the right wing; I copied the range mapper for the right wing, connected it to that null, inverted its output values, and it all worked. Now, if C4D had a proper symmetry tool (where have I heard that before?) none of this would have been necessary, but it was not that difficult a problem to overcome. Thanks again for everyone's help. Dave
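Xpresso is node-based, so there is no graph to paste here, but for anyone who would rather do the same "opposite rotation" trick with a Python tag on the light's null, here is a minimal sketch. The object name and the rotation component being inverted are assumptions about this particular rig, not anything general.

```python
import c4d

# Python tag sitting on the null that carries the mirrored (left-wing) light.
RIGHT_WING_NAME = "Wing_R"   # assumption: name of the object driven by the fold rig

def main():
    right_wing = doc.SearchObject(RIGHT_WING_NAME)
    if right_wing is None:
        return

    host = op.GetObject()          # the null this tag is attached to
    rot = right_wing.GetRelRot()   # heading, pitch, bank in radians

    # Mirror the fold: keep two components, invert the one that drives the fold.
    # Which component to negate depends on the wing's axis in your own rig.
    host.SetRelRot(c4d.Vector(rot.x, rot.y, -rot.z))
```

This is just the scripted equivalent of copying the range mapper and flipping its output range.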
-
I have a rather deep hierarchy of objects, all under a parent which is then placed under a Symmetry object in R23. Everything in that hierarchy gets appropriately mirrored with the exception of the lights. I could just manually duplicate the lights, but that part of the model is under Xpresso control, and the duplicated lights move in the opposite direction to the model when I place them under the same parent null. I am sure I could figure it out, but before I take on the Xpresso rig and work out the appropriate rotation settings in the range mapper, I thought I would ask if there is a simple way to get the Symmetry object to mirror the lights as well. Thanks, Dave
-
GPU Rendering and Falling GPU Prices (1 3090 vs 2 3080s)
3D-Pangel replied to Falstaff's topic in Discussions
My question was whether 1+1 = 2 in Octane. I know that 3rd-party render engines will use both GPUs, but it is usually more like 1+1 < 2. NVLink with two of the exact same compatible cards will yield 1+1 = 2. So Octane is a bit more efficient than RS in this respect. Dave
-
GPU Rendering and Falling GPU Prices (1 3090 vs 2 3080s)
3D-Pangel replied to Falstaff's topic in Discussions
So can Octane make use of two different cards where the GPU processing power available to the renderer is simply the sum of the two different cards' capabilities? I know that with Redshift, they do say that you can use two cards, but (even if both are the same card) the GPU power available to Redshift is not a straight summation of the two cards' capabilities. I always thought that the only way to make two cards look like one to any render engine was that they both had to be the exact same NVidia card, the cards needed to be NVLink compatible, and you needed the NVLink hardware. But based on your post about Octane, I am wondering if this is all just a Redshift limitation. Dave
-
Okay...so they did change the iconography for the connection points from white circles to diamonds. That is what was throwing me. I guess it is just another adjustment in the ongoing effort to change the UI. Dave
-
I understand that. But all things being equal, the main point is that the A5000 (in my mind) now compares favorably to the 3090Ti. I need to see rendering benchmarks between the two, though, to fully understand what I am losing with 30% fewer CUDA cores. The A6000 is just a luxury card. When you purchase it, you have to go into it with that mindset and the presumed peace of mind that you will never outgrow 48 GB, because it is really hard to cost-justify otherwise. But with that said, I have asked in this forum what you gain by having 48 GB of VRAM. What can 48 GB of VRAM get you over 24 GB? So far, no one has presented a strong case for 48 GB. Dave
-
Very interesting discussion....especially concerning the value of CUDA core count to Redshift. As Redshift RT rolls out, what also becomes more attractive (despite its ridiculously high cost) is the A6000. At 48 GB of memory and 10,752 CUDA cores (same as the 3090Ti), it only uses 300W of power. So, on paper, it is the better choice if you are worried about power consumption but still want to reap all the rendering speed of the 3090Ti. Unfortunately, it costs $3400 more than the 3090Ti. The best yearly contract price I can get for power is $0.1149/kWh. As the 3090Ti draws 150W more than the A6000, earning back that $3400 through the electric bill alone would take roughly 197,000 hours of rendering. At 40-hour weeks, 50 weeks a year, that is close to a century. Maybe the 3090Ti is the better choice after all over the A6000.

Now, relative to the A5000: while it has 30% fewer CUDA cores than the 3090Ti, the payback period for its extra $600 in cost (while drawing 220 fewer watts) works out to roughly 23,700 hours, or almost 12 years at that same usage. Hmmmm......the A5000 still looks pretty good, though more for the lower power draw and heat than for any savings on the bill. Dave
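For anyone who wants to plug in their own electric rate or card prices, here is the quick back-of-the-envelope calculation used above; the prices, wattages, and $0.1149/kWh rate are just the figures quoted in this thread, not authoritative specs.

```python
# Back-of-the-envelope payback estimate for a pricier but lower-wattage GPU.

def payback_hours(price_premium_usd, watts_saved, usd_per_kwh):
    """Hours of rendering before the power savings cover the price difference."""
    savings_per_hour = (watts_saved / 1000.0) * usd_per_kwh   # kWh saved per hour * rate
    return price_premium_usd / savings_per_hour

RATE = 0.1149              # $/kWh, best yearly contract price mentioned above
HOURS_PER_YEAR = 40 * 50   # 40-hour weeks, 50 weeks a year

for name, premium, watts in [("A6000 vs 3090Ti", 3400, 150),
                             ("A5000 vs 3090Ti", 600, 220)]:
    hours = payback_hours(premium, watts, RATE)
    print(f"{name}: {hours:,.0f} hours (~{hours / HOURS_PER_YEAR:.1f} years)")

# Prints roughly:
#   A6000 vs 3090Ti: 197,273 hours (~98.6 years)
#   A5000 vs 3090Ti: 23,736 hours (~11.9 years)
```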
-
If you mean sending an exported USD mesh that you imported into C4D back to VUE/PF, then no. This is an Omniverse-only feature, because it would require a plugin similar to the extension that we offer for OV. But since we already have plugins for a few apps, I will ask internally if this functionality from OV could be ported over. Finally, there is also MaterialX as a second material option. MaterialX is one of several shading languages (with MDL and OSL being the other two popular ones, which I think can also be derived from a MaterialX graph) which are gaining more and more support from multiple apps and render engines. With MaterialX, you can code the actual material nodes in a material graph, for example a custom noise node. When you then load an asset with a MaterialX material in any supported application, not only is the node graph recreated, but also the nodes themselves. Each application / render engine would then show the exact same custom noise node with the same controls and parameters. There would no longer be a Redshift noise node and a Cycles noise node and an Octane noise node. No more proprietary materials. Everything would be standardized. Because not every node that every application offers can easily be recreated through MaterialX libraries, MaterialX is usually its own type of material and you couldn't, e.g., create a Redshift material with MaterialX nodes in it. You would have to decide whether you want to go with a proprietary Redshift material with all capabilities, but readable only by Redshift, or with a MaterialX material with not quite as many options, but readable by any app that supports MaterialX. USD supports MaterialX, so the deeper the MaterialX integration becomes across applications and render engines, the easier and more seamless the material exchange will become.

Wow! Just wow. For a bald guy, you just blew my hair back. My last question (quoted above) was purely relative to Vue and C4D, without any relation to USD or MaterialX. It was based on the assumption that if node tree relationships were being passed from Vue to USD, then they were also being passed from Vue to C4D. You clearly explained that this was not the case (and I applaud e-on's cleverness for making it look that way). But I did appreciate the explanation of MaterialX.

It does make you wonder where all this development on an industry-standardized pipeline is going and why. You may wonder, "Why is Dave asking why? Isn't it obvious? Complete interoperability of various DCC applications." Well, then how do the people developing MaterialX, USD and whatever comes next make money? Why are they doing it? Honestly, I would not be surprised if there is a complete inversion in the industry. The DCC applications all become free.....but you pay an annual subscription for the conversion licenses needed to move those assets (including the ability to render them) to other applications. Every MaterialX library has an annual fee. Every USD plugin has an annual fee. The host application (like e-on or Maxon) offers the program for free but makes money from these licenses. Part of this will also be a move to no longer offer native renderers in these host applications: you can only render an asset in a standalone renderer (which is now free), but you need to pay the annual subscription fee to get it there. Offering the host application at a very low price creates demand for the conversion licenses. It is like the HP model for printers and ink: the printers are cheap, but the ink is where they get the profits.

Pure speculation on my part...and admittedly somewhat cynical. I guess I no longer trust businesses these days when it comes to the development of what appears to be a wonderful future: "Wow! Someday I will be able to move my Redshift-developed scene in Vue to render in C4D using Octane!" Yeah....nice dream. But trust me: someone has already worked out how to make money from EVERYONE in this effort, otherwise they would not be working on it. Dave
-
The RTX A5000 compares favorably with the 3090Ti. Both have the same nVidia compute capability score of 8.6 and both have 24 GB of memory. The 3090Ti has more CUDA cores (10,752 vs. 8,192), but the A5000 uses only 230W vs. the 450W of the 3090Ti. The A5000 is more expensive though....$2600 vs $2000. So how does the CUDA core count relate to the compute capability score? Both cards have the same score, yet the 3090Ti has 31% more cores. I agree that the higher power consumption bothers me. 850W of system power is like running a microwave oven. Not sure I want to face the electric bill of running the equivalent of a microwave oven for 50 hours to render an animation. Not at today's energy prices. Dave
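Out of curiosity, the electric bill for one render is easy to estimate. A rough sketch, assuming the full 850W is drawn for the whole render and using the $0.1149/kWh contract rate quoted earlier in this thread; your own rate will obviously differ.

```python
# Rough energy cost of one long render (figures taken from this thread).
system_watts = 850          # full-load system draw
render_hours = 50           # length of the render
rate_usd_per_kwh = 0.1149   # best yearly contract price quoted above

kwh = (system_watts / 1000.0) * render_hours   # 42.5 kWh
cost = kwh * rate_usd_per_kwh                  # about $4.88
print(f"{kwh:.1f} kWh -> ${cost:.2f} for the render")
```

Not a huge number for one render, but the gap between a 230W and a 450W card does add up over a year of steady use, and the heat dumped into the room is its own problem.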