Everything posted by 3D-Pangel
-
Daniel,

Thank you again for another well-written and informative explanation. The video was also very interesting. The fact that you can save imported USD assets in the native format is good to know, as it does provide all the flexibility between different applications it promises. I hope it stays that way over time.

Question: The video showed seamless integration between Omniverse and PF, and how the USD format preserves metadata settings from PF... which I would assume also means the entire PF node tree as well. Is that true? You say that USD just stores the parameter values, but when plants from Omniverse were opened up in PF, the entire node tree appeared as well. That goes beyond USD preserving just standard parameter values (if I understand your explanation correctly). If so, I would have to assume that the USD format can preserve node trees from other DCC apps as well, but that the node trees are only carried along in the USD file with the appropriate metadata tag to identify them to the host application.

So here is a question regarding shader node trees, if what I am assuming is actually true: as Vue now supports Redshift, could a Redshift node tree developed in Vue be interpreted by C4D's Redshift if it was imported into C4D as a USD file? I would assume the USD metadata would identify the RS node tree to any other application supporting RS as well.

Next question: Does Vue offer the same level of integration with C4D? Can I select an imported Vue asset in C4D for editing back in Vue? That is what xStream was originally all about. If so, can I then edit the Vue-developed Redshift shader graph in C4D on an imported Vue asset?

Fascinating discussion, so I apologize for all the questions -- particularly with respect to how much time/effort you put into providing the answers.

Thanks again,
Dave
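P.S. For anyone curious where this kind of data lives in a USD file, here is a minimal sketch using the pxr Python bindings (the file name "asset.usda" is just a placeholder). It only illustrates that USD stores plain attribute values plus optional custom metadata that a host app could use to rebuild its own node tree; it is not how PlantFactory or Omniverse actually encode their node graphs.

```python
# Minimal sketch: inspect attributes and custom metadata in a USD file.
# Assumes the pxr (USD) Python bindings are installed; "asset.usda" is a placeholder.
from pxr import Usd

stage = Usd.Stage.Open("asset.usda")

for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())

    # Standard parameter values live on attributes...
    for attr in prim.GetAttributes():
        print("  attr:", attr.GetName(), "=", attr.Get())

    # ...while app-specific extras (the kind of thing a host application could
    # interpret to rebuild its own node tree) can ride along as custom metadata.
    custom = prim.GetCustomData()
    if custom:
        print("  customData:", custom)
```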
-
So... is it fair to say that USD is the connective tissue in the Omniverse world? In essence, FBX on steroids? Finally a file-sharing platform that handles everything: animation, dynamics, materials, geometry, lighting, layers, etc., etc.? But access to USD assets is only managed through the servers in the Omniverse core, and to see those assets on your PC you need an RTX-based GPU? Now, I know you can set up a "virtual" Omniverse drive on your PC, but I have to believe that "virtual" drive still has some connection to the central Nucleus database. Is that a fair summary? If so, no wonder it is a freemium application.

Consider this question: say someone creates the most amazing T-rex model that is expertly modeled, textured, and rigged in Maya and exports it as a USD file. Could a C4D user then import that same USD file into C4D with all rigging, modeling, and textures intact and fully functional? If so, could they then save it as a C4D file? I am thinking no. Once imported as a USD file, it can only be saved as a USD file.

If that is true, and more and more people collaborate in the Omniverse universe, then ultimately all DCC assets and their IP exist as USD files on an nVidia server and can ONLY be used if you have an nVidia GPU. As the CGI industry grows to over $40 billion USD by 2025, nVidia will not only have a lock on its content but also on the hardware necessary to make use of it.

....and I thought Maxon subscriptions were draconian.

Dave
-
Given that Maxon now releases C4D versions every 6 months, I have always wondered how much more difficult it must be for Maxon to keep up this pace. Each release has a certain level of overhead to produce, regardless of whether it captures 6 months or 12 months' worth of effort. Remember that each release needs to be both internally and externally documented. Each release needs to go through internal testing and regression. When issues are found, there is less time to recover if they are holding to that schedule. So overall, the possibility and risk of bugs increase when the time between releases is cut in half.

What also works against them is that the program is becoming more interconnected with nodes. It stands to reason that as the plumbing gets more complicated, testing takes more effort. This may explain the role of a Test Automation Specialist. When you change a node, think of all the node combinations that need to be tested to ensure everything works as expected.

So I have to ask: who really benefits from a release every 6 months if software quality starts to degrade or people start to lose faith in using it? Read the forums and you will see a general rise in comments about stability since R21. I have noticed that R23 has viewport glitches that I have never seen in any release since R9. I mean, we all could have waited 3 more months for R25. But you can't do that, otherwise the whole 6-month cycle takes a hit. Honestly, I think we all have much more to gain if we let the software mature a bit more in-house before a release, if it means a return to rock-solid stability.

Dave
-
Can you load an older version of C4D released before you purchased that machine? As it ONLY occurs in the front view, I am inclined to think it is C4D R25 related. But if you could port that scene to FBX and then load it into R21 and it still occurs, then it may not be C4D related.

My understanding is that Maxon removed OpenGL support from C4D with S22 -- and Apple also dropped OpenGL support on Macs around that time as well. I have no idea what Macs are using today, but trying everything out with R21 and seeing if everything works would be an interesting data point.

Just a thought.

Dave
-
Interesting.....so, how do you use image files with that shader? Are they changing the input connection points to little diamonds from the round white dots we are familiar with? Am I missing something? I wonder if they will add a separate section for hair, volumetrics, etc. A section for fire/smoke etc. would be nice as there are still a few steps required to shade VDB files appropriately. Dave
-
NanoVDB (nVidia's real-time volumetric rendering) coupled with XP's port to GPU simulation should give Embergen a run for its money once everything has fully matured. But XP has a long way to go. Not sure how far Redshift has to go, but I would imagine they are making better progress than Insydium on anything involving GPU acceleration, simply because they have been doing it longer.

Dave
-
Interesting....a lot of positions. What does "m/f/d" mean?

What I found interesting: positions for developing real-time GPU rendering for both C4D and Redshift, and for expanding existing modeling tools.

What I didn't see: anything related to character animation positions.

What is a sign of the times: data analytics positions. Telemetry is going on each time you open up the program. How you use their software is being recorded and sent back to Maxon. Nothing unusual there. If that bothers you, then you can turn that off.

Dave
-
There are actually some really good YouTube videos that explain how to use Xpresso for railroad lights, flashing lights, etc., which are "oscillation"-type effects (in a manner of speaking). They at least introduce you to the relevant Xpresso nodes. I actually studied those tutorials to create a two-gun anti-aircraft type of rig that oscillates the recoil back and forth between the left and right guns. What I have found is that Xpresso is like any other type of computer language... you need to keep using it or else you forget how. So there is always a bit of recovery time to re-learn and pick it up all over again for simple things that Signal can do in literally a few clicks.

That is why it was such a great addition to my toolkit for $69 USD. Even if GSG wanted to charge me $35 for each update, it would still be worthwhile to maintain for the convenience it offers. Sorry, but for $399/year -- no way. True, I get tutorials, texture collections, an HDRI collection, and other plugins for that amount, but there are far cheaper alternatives to their texture/HDRI collections and tutorials, and all of their plugins together are not worth that much each year to maintain, no matter how good they are.

Not sure why GSG has to take an "Adopt our subscription program or you will get NOTHING! Submit and obey" type of attitude to ongoing software maintenance. Even Maxon offers its C4D perpetual license holders an upgrade option outside of their subscription plans. Not sure why GSG cannot do the same.

Dave
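P.S. For anyone who wants a feel for the kind of oscillation Signal (or an Xpresso rig) automates, here is a minimal sketch as a C4D Python tag. This assumes the tag sits on the object you want to animate, and the amplitude/frequency values are just placeholders; Signal obviously does far more, but the core idea is the same -- drive a parameter from the document time.

```python
# Minimal C4D Python tag sketch: oscillate the host object's Y position over time.
# "op" is the Python tag itself (predefined in a Python tag); values are placeholders.
import c4d
import math

def main():
    doc = op.GetDocument()          # document the tag lives in
    t = doc.GetTime().Get()         # current time in seconds
    obj = op.GetObject()            # object the tag is attached to

    frequency = 1.0                 # oscillations per second
    amplitude = 50.0                # centimeters

    pos = obj.GetRelPos()
    pos.y = amplitude * math.sin(2.0 * math.pi * frequency * t)
    obj.SetRelPos(pos)
```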
-
Chris is awesome. Extremely gifted programmer and educator. If you watch his podcasts, particularly those portions where he solves problems people throw at him in the chat window, you'll come to the conclusion that Chris has a Zen-level mastery of C4D (IMHO). "Zen level" is a "stream of consciousness" mastery of the software where you just think it and your fingers go to work to create what is in your mind's eye, without you being consciously aware of the time or steps required to get it done. That is nirvana for a creative like Chris (and... yes... I am a bit jealous). Honestly, I think explaining it slows him down -- so we are thankful he takes that time for us mortals.

To step back a bit: one of the best parts of the GSG product portfolio was Signal, which I believe was developed by Chris while he was part of that group. Signal just made life easy, and while it is true you could do most of what Signal automates yourself with Xpresso (as I have done in some cases), it was still a really handy timesaver to have in your toolkit. I purchased it years ago before the GSG Plus subscription program came out. Unfortunately, I can no longer get updates to Signal for future versions of C4D past R23 unless I spend $300/year for GSG's subscription program.

Please Chris... if you are out there... do not go subscription. I am happy to pay for updates to perpetually licensed plugins. Also, not sure if there is a non-compete clause you signed with GSG before you left. But if you did not, then please consider the capabilities of Signal and fold those into either existing plugins or something altogether brand new.

Just a hope,
Dave
-
But it would make for an interesting murder mystery set in 2032. Someone is being blackmailed for a crime they did not commit. So the blackmailer is murdered, which is then pinned on another person (the hero of the story) due to some faked surveillance videos. The hero traces what physical evidence is available back to the original person that was blackmailed, only to find out that they have been dead all along and only appeared to be alive via all these AI-faked Zoom calls. So who is really behind it all? Twists, turns, and doppelgangers galore. Throw in a love story and a cute dog, and you'll laugh! You'll cry! It will become a part of you.

BTW: there is an AI app that can write this story for you right now.

Dave
-
One thing no one has mentioned is the birds flying around outside at the start of the video. Pretty good animation actually, especially as they are landing and kind of doing that reverse-flapping type of move prior to touchdown. Were they real time? Did they motion capture a bird? Or were they hand animated, or imported footage of real birds? Like the woman in the shot, we will never know their true origins.

But no matter how much you pick this apart, the fact remains that it is amazing. I don't think we are criticizing it because we are unimpressed. I think the criticisms come from a need to rationalize something that goes beyond what we are accustomed to understanding. I mean, real-time photorealistic humans are a level of technology that borders on magical. To quote Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from magic.

Now, what is NOT being discussed are the ethical implications of all of this. AI programs are now replicating people's voices, deepfaking their likenesses, and now rendering their actions in real time. Is anyone other than me a little scared by where all this will go in 10 years? This takes identity theft to a whole new level. Think of how far real-time rendering and AI have come in the last 10 years and project that forward another 10 years. Trust me... the next scary advancement will be when someone creates an AI program that turns a few candid photos of yourself into a real-life replica of yourself for your video game avatar. People will flock to it because it will be so cool. But when that happens, the floodgates are open.

Here is why: as we all move to a "hybrid work" model, our relationship to the outside world will be via Zoom calls -- essentially digital images that can be manipulated. Imagine what hackers could do in the not-so-distant future. You get the point. We all worry about someone stealing our financial identities, but these advancements put our ownership of our own physical identities at risk as well. That is not only fundamentally dehumanizing but extremely frightening as well. Not really sure we should be excited about this.

Dave
-
There is a great deal to like with the new UI... except the icons. Changing the icons is like changing all the letters of the alphabet to emojis. The best way to overcome that learning curve is to completely customize the interface, where you manually decide where every command needs to go. A lot of work with very little to gain, because an icon is just an icon. Making the icon designs sleeker, thinner, and more "modern" really adds no value other than appealing to user aesthetics. I understand the need to refine their resolution for 4K monitors and even tone down the color schemes as some may have eye-strain issues, but you could have done all that with the existing icon designs. The true goal of a menu icon is to be instantly recognizable and to quickly convey its function. Unfortunately, based on the complaints, I don't think that worked out too well.
-
The whole setup of this shot gives me problems, but I understand that it is for artistic, dramatic effect to have the background spin around her. Nevertheless, the lighting on the hair seems wrong at this point. So when you factor in the room dynamics, with her descending into a hole as the room spins around her head (that is the ceiling dome behind her now), then this has to be a two-part composite, as those elements could not be doing what they are doing if they all existed in the same 3D space. The mis-lighting on her hair further supports this point. So does that still make this "real time"? Maybe. But as I study this, it raises more questions with each viewing.

Dave
-
DS Hanger final - front with Falcon.jpg
-
Interesting thread, and so many different issues on the table: being software agnostic, Maxon criticism, the pace of updates. One thing I have not heard mentioned is fixing bugs. Is it fair to say that Maxon's claim to fame of rock-solid stability no longer applies -- especially for perpetual license holders? Maxon's solution is to get a subscription if you want software quality, as their commitment to quality for perpetual license holders ends after 3 months. That is criminal (IMHO). Imagine if the Toyota brake fiasco a few years back was only fixed for people who drove leased cars.

Now, there were a lot of bugs with R25 (which is amazing, as other than the UI, really nothing major changed in that release). How did that happen, and more importantly, were those who ran to get an R25 perpetual license as soon as it came out still able to get the latest bug-fix release? I ask because I think it took 3 patches to settle that release down, and the final patch was available more than 3 months after R25 was first released. If any perpetual license holder was denied the final patch because their support period ran out, I would love to hear that. I think we would all love to hear that.

So I think it is fair to levy criticism against Maxon for any decision that walks away from software quality and customer support for only one class of customer, namely perpetual license holders. Now, is it fair to criticize Maxon on their decisions regarding updating one part of the program over another? That is a subjective argument, as it comes down to the individual user's skill level with the software and/or what they use it for. For example, lack of modeling tool development may be more of an issue for modelers than it is for texture/lighting artists who purchase their models. But quality and customer service should be something EVERYONE cares about, and when that takes a back seat to a revenue model, then we should ALL have the right to complain, and complain LOUDLY.

R25 showed us that Maxon may have lost the recipe on quality control, OR they are making well-informed decisions to release software that they know is not quite ready for prime time just to keep to a schedule. As R25 was up against the all-important traditional September release date, I will assume the latter. As R26 is taking its time, my sense is that they learned from R25, which is a good thing. Glad to see subscription license holders get that level of consideration. What happens with R27 in September remains to be seen.

Dave
-
As an aside -- but somewhat related to this discussion -- did I miss something concerning Z-Brush NOT being part of Maxon One? If you go to the Maxon site, Z-Brush is listed as a Maxon product, but it has yet to be included in Maxon One. I would assume that this may have something to do with the expiration of existing Z-Brush licenses, but I am not sure. If it does, then should Maxon purchase Insydium, my guess is that, as Fused is a licensed product, it also would not become part of Maxon One until those Fused licenses expire. If so, remember that Insydium allows you to purchase licenses many years in advance (just add one year onto another). So this may complicate the purchase of Insydium, as Maxon is really about Maxon One content rather than C4D capability.

Just a thought. Probably way off base.

Dave
-
That's my sister! WTF? That explains why she never wanted to get together and only kept in touch via Zoom calls all these years -- especially during the pandemic.

...it was all a lie.

Dave

But then again, am I real, or an AI application flooding Core4D with posts? You decide.

END OF LINE
-
When I watch Resonant Chamber, I get a headache thinking about:
- Figuring out how to design a single musical instrument which incorporates 9 separate string instruments
- All the modeling and rigging required
- The work to break down each musical track for each separate instrument, and to develop the scripts required to auto-animate both the string plucking and string dampening so they occur at the right moment. Note: I am not a musician, so I am pretty sure that "string plucking" and "string dampening" are not the right terms... but you get the point. Do MIDI files contain both tones and a data file that says "Now I am playing a C note and holding it for x seconds"? If not, then that is an additional step.

And I am pretty sure I am missing about a dozen additional activities. So when you think about just how difficult standard Animusic videos are to create, Resonant Chamber just takes that all to a new level.

Question: Is this stringed musical instrument used in other videos? I ask because the harps were never used.

Overall.... outstanding.

Dave
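P.S. Partly answering my own MIDI question: a MIDI file does not contain tones at all, just timed note events (note number, velocity, timing), which is exactly the kind of data you would have to map to pluck/dampen animations. A minimal sketch with the third-party mido library (the file name is a placeholder) shows what that data looks like:

```python
# Minimal sketch: list note events and hold durations from a MIDI file using mido.
# "song.mid" is a placeholder; a note_on with velocity 0 is treated as note_off.
import mido

mid = mido.MidiFile("song.mid")

elapsed = 0.0
active = {}  # note number -> start time, used to compute hold durations

for msg in mid:                     # iterating a MidiFile yields times in seconds
    elapsed += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        active[msg.note] = elapsed
    elif msg.type in ("note_off", "note_on"):   # note_on with velocity 0 == note_off
        start = active.pop(msg.note, None)
        if start is not None:
            print(f"note {msg.note}: starts at {start:.3f}s, held {elapsed - start:.3f}s")
```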
-
From the album: Death Star Landing Bay
-
From the album: Death Star Landing Bay
-
One more photo with the Falcon....just because rendering them brings so much satisfaction:
-
Wow... that is grim. Not sure Germany will be pulled into a conventional war, and should the unthinkable happen, the last thing I am going to be thinking about is S26. So I don't even want to ask you about 3D Coat's next release, as the development team is based in Kyiv.

Dave
-
That they bring back the old icons...but that is a foolish hope.
-
Okay.... this may be the finished scene. I completely remodeled the front of the bay to make it look a bit cleaner and fit better with the back wall. Note I also added some crates and cables to give it more of a "used" look. The glow needs work, but from a modeling perspective, I think I am done.

Here is the back wall showing the crates better. Note also the landing "arrows". They appear in some works and tangentially in the original Star Wars hangar. You don't see them directly, just this white trapezoid across the floor in most of the movie, but in one overhead shot you realize that they are arrows -- presumably a guide for where to land.

The blast doors all work as well, so I think I am about done. Now I need to make a low-res proxy to incorporate into shots showing the landing bays from the outside of the Death Star. Basically just a cube with the walls as a texture map rather than modeled. The only modeling will be the "C"-shaped floor-to-ceiling supports that extend out into the bay itself.

Dave