Everything posted by 3D-Pangel
-
Interesting comments on the finished images, as they imply that the creation of the VDB files is the predominant determinant of the final image, and therefore that if it doesn't look good it must be how the VDB file was created. I always thought that the purpose of the simulation engine is to get the fluid motion correct. The predominant channels that impact rendering are density and temperature. So if the simulation engine is off in its calculation of density and temperature for each voxel element, then not only would you get funny-looking renders but funny-looking fluid motion. But if the fluid motion looks correct, then how the image is rendered is really up to the artist manipulating the shaders for both smoke (using the density channel) and black-body illumination (using the temperature channel) in the VDB dataset that the simulation engine creates. Viewport rendering really doesn't mean that much to me other than giving me some representation of the amount of fire vs. smoke being created, because it will all be tweaked after caching that VDB file and passing it to the render engine. So all the occlusion effects, self-illumination, etc. that you want in your final image are determined there and then, and not before. Is that a naive view, or am I missing something? Dave
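To illustrate the point that the density channel is a shader input rather than a baked-in look, here is a minimal sketch (plain Python with hypothetical names, not any particular renderer's API) of how a volume shader typically converts density samples into transmittance via Beer-Lambert absorption:

```python
import math

def transmittance(densities, step_size, absorption=1.0):
    """Accumulate Beer-Lambert transmittance along a ray through a voxel grid.

    densities:  density samples (one per ray step) from the VDB density channel.
    step_size:  ray-march step length in world units.
    absorption: artist-controlled extinction multiplier (the shader knob).
    """
    optical_depth = sum(d * absorption * step_size for d in densities)
    return math.exp(-optical_depth)

# The same cached voxels read as thin haze or thick smoke depending only on
# the shader parameters, not on which engine produced the cache.
thin = transmittance([0.1, 0.1, 0.1], step_size=0.5)
thick = transmittance([2.0, 2.0, 2.0], step_size=0.5)
```

The point being: once the density values are cached, the opacity of the final render is entirely in the hands of the absorption multiplier the artist dials in, which is exactly the post-cache tweaking described above.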
-
This video made me want to learn more about Affinity, because very little was said other than that something great is coming. Well, you go to the website and it is down because they are "working on something big". Not sure why they had to take the old site down, but all I know is that if you are creating this much of a mystery....you better deliver. History has shown that reality rarely lives up to the hype, and this just feels like hype to me.....but I could be cynical. Always happy to be pleasantly surprised --- so here's to hoping. Dave
-
I think there is something sinister about AI. Its engine goes beyond what is seen and into the unseen areas of our psyche with the art it creates. How does it do that? I have no idea but check out this example and you tell me if it is not scary: I entered the text "Self Portrait of what Cerbera is thinking while he models"... ....and I got this disturbing result...... Nothing but triangles!!!! And in a pleasant and artistic arrangement as well. NOOOOOOOOOOOOOOOOO!!!!!!!! That shook me to my very core!!! Dave
-
Thank you for the link. But I have to say, the download process and the additional dependencies required gave me some concerns as to whether all those executables exist ONLY when Blender runs, or whether they are going to create conflicts/registry issues with other programs on my computer. I've been bitten before on these types of things, so the old adage "once bitten, twice shy" applies. Dave
-
My ideal AI application for textures would be one where you provide one or more separate base texture(s) and it generates a non-repeating, perfectly tiled texture that goes on to infinity. And from that base image it creates the appropriate masks for reflectance, AO, bump, normal, height, specular, luminance and alpha. Luxology actually had something like this long ago called ImageSynth (not sure if it created all the channel textures other than the color channel), but it was too unstable and was therefore discontinued. Of course, that was long before AI...so maybe it can be revived. Dave
-
I think this is a port of text-to-image AI programs like DALL-E or Stable Diffusion, where you type a phrase like "Ferrari on the moon" and it generates the appropriate image based on the analysis of over 5 billion images. Those machine-learning programs essentially work with text rather than 3D models, but the port to 3D was a rather clever thing to do, particularly when you understand how these AI "black box" engines were built....which I think is the technique called Stable Diffusion. Again, not sure. What they did to train the "black box" of the AI engine was to take each of the 5 billion images, add random noise in increasing amounts to each along with a structured text description, and then train the black box to de-noise that image. The engine would denoise in a somewhat random way, but by comparing the denoised image to the original, the program could ask: "am I getting better or worse?" Based on that answer, it would iterate again, and again, and again until it got as close a match as it could. They went through this routine for each of the 5 billion images, and each time the AI engine was successful, its "machine learning" algorithm improved. Finally, when that was done, they fed it an image that was pure 100% noise and, based on the text alone, asked the engine to denoise it. As there was no real starting image but rather just random noise, the AI engine had to use the algorithm it had built up from all the previous 5 billion runs and create an image purely from the text. And that is how you can create art from text. So, getting back to this plugin, I wonder if there are two things going on: The simple low-poly rendering provides an initial "noisy image" to the AI engine as a starting point. You still have to enter text to the program to get things started. What I am NOT sure of is whether any of the 3D scene file information is read (like object position, light direction, etc.) to help in the creation of the final image, or whether it all comes from that crude render you created in Step 1. Now, you could add to the text how you want the image to look. For example, you could have entered "in the style of Monet" and instead of a photoreal Jaguar on the moon you would have gotten a French impressionistic painting of a Jaguar on the moon. So this could do some pretty neat NPR animations as well. Again, I am over-simplifying how all this really works, but based on my understanding of how these AI engines were built, I think that is what is going on. Dave
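For what it's worth, the forward "add noise in increasing amounts" step described above can be sketched in a few lines of plain Python. This is a toy illustration of the idea only (a simple linear schedule, not the actual beta schedules or networks used by Stable Diffusion):

```python
import math
import random

def forward_noise(x0, t, T=1000):
    """Blend a clean sample x0 with Gaussian noise; t/T controls how noisy.

    alpha_bar goes from 1 (t=0, untouched image) down to 0 (t=T, pure noise).
    A hypothetical simplification of the schedules used in diffusion models.
    """
    alpha_bar = 1.0 - t / T
    return [math.sqrt(alpha_bar) * v + math.sqrt(1.0 - alpha_bar) * random.gauss(0, 1)
            for v in x0]

random.seed(0)
image = [0.5] * 8                            # stand-in for image pixel values
slightly_noisy = forward_noise(image, t=10)  # mostly the original image
pure_noise = forward_noise(image, t=1000)    # no trace of the original left
```

Training teaches the network to undo this blend at every noise level; generation then starts from the `t=T` case, where only the text prompt (or, in this plugin's case, a crude render used as the starting point) steers the result.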
-
Makes you wonder what he is going to do for an encore. 😀 No pressure!!! 😆 Dave
-
This is not a new problem, having been written about in Pierre Grage's 2014 book "Inside VFX". I read this book and it touches on basically the same issues raised in that video. Now, what is surprising is that it appears nothing has changed in over 8 years. The VFX industry is still in crisis. So why has nothing changed, or is this really an 8-year-old crisis? VFX houses won't keep behaving in a way that burns out their teams while losing money. That is not a sustainable model, and certainly not a model that will keep people at their computers for 8 years working long hours for little pay. If the people who run VFX houses are all cold, heartless taskmasters who are happy to work their employees to death for little pay, then word gets out about that environment/company and the pool of talent dries up. People are just not going to pursue that career if there are no rewards. Any employer knows how much a high attrition rate negatively impacts the ability of their business to operate. If you can't recruit talent, you make changes. Again, I am looking at this from an 8-year perspective. No multi-billion-dollar industry can be in crisis for 8 years. Problems either get fixed or the industry collapses. Well, the industry hasn't collapsed. Now, I am not dismissing the concern being raised, and my heart and support do go out to the VFX artists who put in long hours for very little reward. But I have to ask this: if this is an 8-year (or more) crisis for the reasons stated in that video, then what keeps you in that industry? Dave P.S. From my own personal experience, as a pre-college teen deciding on a career in the late 1970's, I was passionate about VFX in the pre-CGI days. Motion control, stop-motion animation and optical printers are what excited me. But then I learned that only 2% of those in the film industry made over $15,000/year at that time ($49,000 in today's dollars). Sorry....those were not good odds and I decided to do something else. I would imagine the calculus would be the same for anyone considering that career today.
-
Amazing work and apart from the sheer artistry of it all which is very impressive, there is the mood, editing and pacing, and sound design/music. So much to appreciate and enjoy. What an introduction and very happy you have decided to join our little community. Dave
-
Thank you for taking the time and the initiative to explain how Vue and C4D work together. This I understood, as I did read the manual. My issue was viewport performance in C4D. Now, we know that at that time C4D was not known for its zippy viewport, but for some reason even Vue's low-poly proxies brought my viewport performance to a crawl in C4D. The same object would have no issues in Vue's native viewport, but once I opened that same asset (with the proxy safely held in C4D's OM), everything would crawl from that point forward in C4D's viewport. That was as far as I got with it. Add a tree and viewport manipulation became unmanageable. This is what I meant by operability being a nightmare in my original post. Drivers were updated, help tickets were created, Vue was re-installed, etc. Maybe I should have reinstalled C4D as suggested, but that just did not sit well with me, as Vue was not my only plugin. Now this is all ancient history, but I think it just comes down to something missing from my PC that made C4D incompatible with Vue xStream. So while happy with Vue stand-alone, that whole ordeal (coupled with the lack of updates after the Bentley acquisition and the loss of account information from the site hack) really did not make me a believer in Vue at that time. Again… ancient history. Fortunately, recent moves by e-on are making me a believer again. Dave
-
May I direct you to Vue's own version of Cineversity: the e-on software Learning Center. I must say that the amount of content has expanded considerably since I last checked a few months ago. I wonder if Daniel has any idea why (wink...wink...nod..nod)? What I like about this content is that, given its newness, it reflects the latest version. This would be my first choice for learning about Vue and PlantFactory. Another site is Geekatplay™ Studio, Resources for 3D Artists. Tutorials. Unfortunately, the only free tutorials are the oldest tutorials. Another great source of free tutorials is Nick Pellegrino at AsileFX: Nick Pellegrino | asileFX - YouTube. Again, they may not reflect the current version. What is MOST interesting is that a true master of Vue was Dax Pandhi (aka QuadSpinner). He was an amazing artist because he studied erosion and weathering patterns in terrain and knew how to produce those same results in Vue. Every one of his tutorials was a Vue master class. Unfortunately, while old Vue tutorials can be found at AsileFX or Geekatplay, every single one of QuadSpinner's Vue tutorials is gone. And for good reason.... Dax went on to develop his own landscape package called Gaea. Even when I google "QuadSpinner Vue tutorials" or "Vue tutorials Dax Pandhi" and follow links that look like they will bring you to Vue content, you are only presented with Gaea tutorials. The only thing I can find is his book "Realism in Vue" at GitHub: GitHub - QuadSpinner/RealismInVue: Official open source version of the critically acclaimed book, "Realism in Vue". Dave
-
I do have to wonder why the majority of tutorials on Scene Nodes cover things you could create much more easily with MoGraph or Fields. So maybe the lack of adoption comes from something as simple as this: people watch a tutorial, see a primitive being applied to a Matrix object, and yawn. They walk away thinking, "Why should I learn a whole new system to do what I can already do with far less effort? Show me something amazing I cannot create any other way." Now, "amazing" can also mean "unique". What I see coming out of Blender are some pretty unique tools. While I have watched a few Scene Node tutorials, I must say that none has ignited a spark of creativity in my brain. I do see more amazing things coming out of Xpresso, and that is why I have learned Xpresso. I can also understand why everyone got excited over the prospect of Building Generator being a creation of Scene Nodes, because if that were true, people would be exposed to what Scene Nodes could do! Interest would have been created, as that much-needed spark of creativity would have been ignited. The best thing Maxon could do now is to get their best and brightest together to create something amazing with capsules and splash that all over their news section. I checked---nothing there, though Building Generator was listed. If that was actually done with Scene Nodes, then it would have been mentioned. Dave
-
O.....M.....G Well...as an ex-Vue user, this is a no brainer then for getting me back into the program. Congrats to e-on! Dave
-
I left Vue in 2016, right after the Bentley acquisition and the site hack. Those were dark days, as Vue was struggling to find its place in the new org. To lose your entire account history AND not see much (if any) development activity prompted me to stop sending my money to e-on. Now, Vue stand-alone was a solid app on my system. Vue xStream was not. Operability within C4D was a nightmare. So, when you feel there is no development going on to fix that, and you are locked into only working within Vue when you were sold on Vue working within C4D with xStream....well....it was very easy to walk away. But that was 2016. Fast forward to today and you see a completely different AND MUCH SMARTER approach: complete and open exportation of all assets, plus they are working on Redshift integration. That export capability only existed in the higher-priced professional version, which they are now "smartly" extending to the lower-cost Creator versions. I will be keeping an eye on this page for the differences in capabilities between the licenses after the 2022 release. Personally, I like the C4D interface much more than Vue's, but Vue is not that difficult a tool to use. Ecosystem painting is a lot of fun and has slightly more capability than C4D's scattering tools. If I can export ecosystems, which was not explicitly mentioned as being added to the new export capability in Creator, then at $199/year you have a hell of a lot of cheap capability for creating and exporting fully evolved environment assets into C4D (clouds, terrains, plants, skies), plus the ability to render them quickly with Redshift in C4D. Remember, this also covers PlantFactory, and right now the next best option for plant creation is Forester from 3DQUAKERS, but their annual maintenance plan is $125, which is kind of expensive. So this change to Creator's capabilities by e-on bears serious consideration given the alternatives. I would start with 1 year just to see how things are working and whether Vue is a going concern (e.g., evidence of on-going development and improvement on a timely basis. Sorry, I still have PTSD from 2016). If all seems good, then you have to admit that $600 for a 5-year Creator license is not a bad deal. I would love to see that type of deal with C4D. Dave
-
I am a sucker for these types of plugins, as I tend to favor environment creation. But I will wait for the Redshift version to come out. I will admit, I am quite impressed with the work from Florian Renner. He now has 3 plugins devoted to building/city creation: CityBuilder Pro - arranges buildings (R21 minimum; Octane and Standard). City Rig - builds and arranges buildings (R20 to S24; Octane). Building Generator - creates buildings and their surroundings, such as streets and sidewalks (R24 to R2023; Octane, Standard, Redshift coming). It appears that Building Generator has more capability to create buildings than City Rig, but while City Rig can do some arrangements of city blocks, there is more capability for that in CityBuilder Pro. So, what works with what? Is some interoperable capability planned? Will they all be upgraded to R2023 and work with Redshift, or are they meant to be stand-alone? Each is a great tool, but you can see that they were all created at different points in time and with different progressions in building creation and arrangement capability. Ideally, I wonder if CityBuilder Pro can work with R2023. Honestly, given how often Maxon breaks plugins with each release, I just don't trust statements about "minimum release capability". I have no confidence that R21 "minimum" means that it is R2023-capable. That needs to be explicitly defined for CityBuilder Pro like he did for Building Generator (e.g., R24 to R2023). I suspect Florian is watching this thread, as he did check out HappyPolygon's profile 15 hours ago (and joined Core4D right before that), so maybe he can shed some light on the future plans for these plugins and whether or not they will work together as a suite of tools. Some really great stuff here that has the potential to be on par with its more expensive cousins in other applications! Great stuff. Dave
-
Given the size of some simulations, I have often wondered whether the amount of VRAM begins to be an advantage over the speed of the card itself. I have done some TFD simulations clocking in at well over 24 GB, which is easy to do when you decrease the voxel size to anything less than 1 cm. Now take that simulation and put it in a scene with other simulations (cloth, more fire, etc.), and then pump all that cached data to Redshift, and you have to wonder what becomes the most significant factor in GPU selection: processing speed, VRAM, or the speed of the CPU and the SSD at moving all that data to/from the GPU. Not knocking the 4090 - it is impressive - but I am just wondering what the real priority is when designing a workstation for handling not just one single simulation but a whole scene of them, and then passing it all to the render engine, also on the GPU. I ask because I only hear single-use examples concerning a specific simulation...not a whole scene of them during rendering. Dave
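The VRAM question can be made concrete with some quick back-of-the-envelope arithmetic. The channel count below is illustrative (density, temperature, fuel, plus a 3-component velocity field); actual solver caches vary:

```python
def sim_vram_gb(domain_m, voxel_cm, channels=6, bytes_per_value=4):
    """Rough memory footprint of a dense fluid cache frame.

    domain_m:  edge length of a cubic simulation domain, in meters.
    voxel_cm:  voxel size in centimeters.
    channels:  floats stored per voxel (illustrative default of 6).
    Returns gigabytes assuming 32-bit floats and no sparsity.
    """
    per_side = int(domain_m * 100 / voxel_cm)   # voxels along one axis
    voxels = per_side ** 3
    return voxels * channels * bytes_per_value / 1024**3

# A 5 m cube at 0.5 cm voxels -> 1000^3 voxels -> roughly 22 GB per frame
gb = sim_vram_gb(domain_m=5, voxel_cm=0.5)
```

Halving the voxel size multiplies the footprint by 8, which is why sub-centimeter voxels blow past a single card's VRAM long before the rest of the scene and the render data are even loaded. (VDB's sparse storage helps, but the scaling argument is the same.)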
-
I wish there was a simpler solution, but unfortunately there is not. You will need to dive into Fields, Cloth, XP and vertex mapping. Not sure what else there is left to learn? 😄 Please note that I made a mistake in my explanation above by using the term "UV map" when what I actually meant was "vertex map". I did not realize it until I wrote the above sentence. Dave
-
You would need R2023 and its new cloth simulation tools, which have balloon modifiers. You would start with one big balloon cloth and create vertex maps for the pleats between the cushions, and then for that vertex-map area create a high cloth self-attraction value using Fields to pull in the cloth and create the 6 individual pleats. To get all those gorgeous wrinkles in the cloth, you would need a pretty high-density mesh for that balloon object. Now, to get the cushion to land where you want it, the only way I could think of is to run it in reverse. If you simulate it with the cushion perfectly placed in its final position and then run it with gravity pointing up while you take the air out of the balloon, you may get what you want. You then cache the simulation. This is where I am not sure, but I don't see why it cannot be done: you then run that simulation backwards to get the cushions to fall down and land perfectly in the chair. Relative to the fluid simulation of the chair appearing, you would need X-Particles. Create a vessel that conforms to the chair design, render it invisible, and pour fluid particles into it (again with gravity pointing up). That is not exactly how it appears in the video, as there is a T2-type effect going on. So, as XP works with Fields, you would need 4 attractors to pull the particles to the chair-leg locations from 4 separate emitters at the fluid's starting point. Each emitter is tied to a specific attractor to ensure that the amount of liquid going to each leg is the same. Once the particles are at the leg positions, another emitter kicks in at each leg position to fill each leg column with more particles, and gravity goes from -Y to +Y until the chair is formed. The seat straps that hold the cushion in place can be done with a spline extrude animated from 0 to 100%. Again, it sounds easy, but it will be a ton of experimentation. What you may want to do is submit this video to Chris Schmidt's Rocket Lasso website for him to work through in one of his "Rocket Lasso Live" streams, where he actually breaks down and duplicates the effects of animations such as this. As this fits right into R2023's new capabilities, he may be attracted to doing it sooner rather than later. Now, he may not get to it in time for when you need it, but it is worth a try. I hope this helps. Dave
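The "cache it, then play it backwards" trick above is conceptually just reversing the frame order of the cached data. A toy sketch (hypothetical names, not the actual C4D cache API):

```python
def reverse_cache(frames):
    """Play a cached simulation backwards: the last frame becomes frame 0.

    frames: an ordered list of cached frame payloads (file paths, meshes, ...).
    """
    return list(reversed(frames))

# Cache of the deflate-with-inverted-gravity sim, played back so that the
# cushion appears to inflate and settle into its final, art-directed position:
cache = ["frame_000", "frame_001", "frame_002"]
playback = reverse_cache(cache)
```

The reason this works at all is that the cache is just stored geometry per frame; once simulation is baked, playback order is free, so the physically easy direction (deflating from a posed position) can be shown in the visually desired direction (inflating into place).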
-
Very interesting video recently posted by Adam Savage (from MythBusters, and an ILM modeler in a previous life) on making a sci-fi panel. While done for real and not in a computer, the principles are applicable to 3D modeling, as he discusses the process in 3 steps: basic form; panelization to break up that basic form, with a sub-step of adding notches to those panels to break up their pattern; and adding greebles, which is also done in two passes: first the non-descript smaller panels on top of the larger panels, and then the real fun part, kit-bashing from pre-existing models. Very fascinating. Dave
-
Where are the beautiful soft-cover quick start guides? Around a year ago, I threw all that stuff away. It broke my heart, but you can only hold onto things you never look at for so long. 18 years of discs, boxes, books, etc. all went into the recycling bin. Honestly, it felt more traumatic than was warranted. I think that is when I started to rethink what "perpetual" really means relative to licensing. Dave
-
Where to find realistic HDR Space textures from the universe?
3D-Pangel replied to Nkzmi Dev's topic in Discussions
I also gave into temptation and created a similar image (but instead used the 2001 Monolith). One thing about using such a high-resolution image is that what appears to be grain is actually stars - which in itself is kind of amazing - but a real problem for anything beyond a still image. Those individual, tiny stars will flicker when used in an animation unless you increase your ray counts to obscenely high levels. What you would need to do is create a blurred copy of that image for the luminance channel but not enable rendering for that background - here it is just a light source. Over that, in the color channel, would be the full-resolution image, but with its illumination values turned way down so that the smallest and dimmest stars completely disappear - thus removing the potential for flickering when rendered in an animation. An interesting comment about the 2001 Monolith: in the book, the ratio of its dimensions (to the 6th decimal place) was 1:4:9 - the squares of 1, 2 and 3. But in the movie, the on-set Monolith was built as 1.25 x 5 x 11 - a ratio of 1:4:8.8. I will admit that in my own attempts to model the 2001 Monolith, it always looked "off" to me. A simple 1:4:9 cube just did not feel right compared to what I was trying to copy from the movie. Very frustrating that I could not even model a 1 x 4 x 9 cube and get satisfactory results!!! Pretty confident that Cerbera and Vector never had days like this. Well...now I know why. Dave -
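The "turn the illumination way down so the dimmest stars disappear" step for the color channel amounts to scaling the pixel values and clipping anything that falls below a visibility floor. A toy sketch with illustrative numbers (not any compositing app's API):

```python
def dim_stars(pixels, scale=0.25, floor=0.05):
    """Scale pixel luminance down and zero out values below a floor.

    Dim stars drop to zero entirely, so they can no longer flicker between
    samples in an animation; bright stars survive, just dimmer.
    scale and floor are illustrative artist-chosen values.
    """
    out = []
    for v in pixels:
        v = v * scale
        out.append(v if v >= floor else 0.0)
    return out

row = [1.0, 0.3, 0.1, 0.9]   # bright star, faint star, faintest star, bright star
dimmed = dim_stars(row)      # the faintest star vanishes, the rest survive
```

The blurred, non-rendered copy in the luminance channel then supplies the smooth overall starfield illumination that the clipped color-channel image no longer provides.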
OMG! A 35K x 16K HDRI sky map of the Milky Way!!! How can you beat that!!! Dave -
An interesting product just released by The Pixel Lab is a VDB data set of nebulae. It might be useful if you want to replicate the battle in the Mutara Nebula from Star Trek II. Otherwise, a 360-degree sky map works best: given that nebulae are light-years in size, chances are you won't need an animation that requires 3D parallax of the nebula as you move through space. -
Cask of Amontillado nominated at Cannes Shorts Festival
3D-Pangel replied to CApruzzese's topic in Discussions
Wow!! Congratulations. I have never met anyone who had a film nominated at Cannes. You know.....Peter Jackson started out this way. Just saying. Actually, you are one up on him. He just crashed Cannes, set up a table and started handing out copies of "Bad Taste" to anyone who walked by. You just have to admire the chutzpah!!! But then again, Steven Spielberg crashed the Universal lot and took over an office without anyone knowing, until a producer for the original "Night Gallery" TV series gave him an assignment to direct an episode with Joan Crawford. Not a bad way to start. Moral of the story....I think you should crash Pixar and take over Lee Unkrich's office (Toy Story 3, Coco). I understand he is not using it anymore since he retired in 2019. Dave -
Maxon will not allow me to extend my Redshift perpetual licence until 2024
3D-Pangel replied to HiFly's topic in Discussions
So my RS maintenance expires on 6/23/23. Based on the second bullet, "You'll be able to renew your Annual Maintenance Agreement up to August 31, 2023", there are two interpretations given my expiration date: 1) As my maintenance ends before 8/31/2023, I can still purchase another year of maintenance on 6/23/2023. - OR - 2) Any renewal of existing maintenance plans will only provide maintenance benefits UP TO August 31st, 2023. Given that my maintenance plan ends on 6/23/23 and Maxon does NOT sell a 2-month maintenance plan, I cannot renew maintenance for two months and get full maintenance benefits until 8/31/23. I believe option 2 is the correct way of interpreting the FAQ page, simply because my license page gives me this message: So...am I upset? Not really. Here is why: I am on R26 and I am a hobbyist who really sticks to single-image renders. Now that RS CPU is released, I can light, texture and optimize my scene for RS. If I decide to render an animation and need the extra power of RS GPU, then I purchase a 1-month RS subscription and move on. One month of RS GPU is a hell of a lot cheaper than annual maintenance costs for a hobbyist who only works with single-image renders. Please note that this is my own personal perspective as a hobbyist, and I understand everyone's situation is different. Dave