Everything posted by 3D-Pangel

  1. Mjolnir and Houdini have found you "worthy". Really great accomplishment. I mean, that is not a trivial object to model in any application for a newbie...especially Houdini. So I am definitely impressed. Congratulations! You have crossed the Rubicon between C4D and Houdini and are now safely on the other side! Maxon and C4D will soon be just distant memories. Dave
  2. Let me clarify a few points: "-I don't understand your concern with the architecture and future growth" As I said, "unless the core architecture is absolutely brilliant", at some point the inclusion of so many disparate features (grease pencil, post processing, and a fully featured 3D app with all that entails) is going to impact the pace of quality management, which in turn is going to impact the pace of development. I mean, I marvel at what the developers are doing today and the pace of change. That is why I suspect that UI improvements to the Object Manager, which is the connective tissue of any 3D program, are the hardest thing to improve. Someone nailed why I have such trouble with Blender: there is a disconnect between the Object Manager and the Attribute Manager (to use C4D terms) when it comes to materials. I swear that has to do with how fast they are growing the program without giving just as much consideration to how every new feature integrates with ALL the managers. I am keenly interested in what Blender 3.0 delivers, as I do sense that it will be as drastic an improvement to Blender's usability as the change from 2.79 to 2.80 was. "-The UI is mostly quite elegant, IMO. But it's a different tool than c4d. Seems to me you've been wrestling for two years now. You don't like where c4d is going, don't like their treatment of customers. But you can't let go. You are asking other apps to be c4d. The reality is that you have invested THOUSANDS of hours using c4d, watching tuts. You expect to have the same comfort level with another 3d app with just a few casual sessions?" Fair point. I even said as much with "Does C4D create a blind spot that prevents you from moving to another platform?". I have invested a lot in C4D - both the program and its eco-system of tutorials, plugins, models and libraries - over the years. Honestly, what keeps me in love with the program (other than the interface) is X-Particles, Forester and Redshift. Should Insydium ever make a version of X-Particles for Blender (as they are already very familiar with Cycles, I hope this is something they are considering), then that makes it just that much easier for me to leave C4D. "Might I ask you honestly: how many hours have you spent in Blender in the past 12 months?" To be honest, not that many - other than checking out the newest releases and playing around a bit. Most of my free time is spent learning Redshift and X-Particles and converting a NoneCG model of Times Square to C4D for that company (a passion project that is massive: over 31,000 objects and over 2 GB in size). Again, more investment in C4D, because once that is done, it just makes it harder to walk away. Look, CG is NOT my day job, and my day job is also my night job as I do a lot of night-time calls as well (the downside of working with an international company in a central role). So time is limited. I guess I need the Blender equivalent of 3D-Kiwi's blue airplane tutorial -- short, but it really gets you through ALL the basics. "-Trust me, Maxon, Autodesk and Luxology have already felt a big bite in business lost from Blender." I certainly hope so. Nothing changes behavior better than competition.
  3. When you watch a video by Ian Hubert and see how fast he works in Blender and how he can make the amazing just happen without effort, you wonder if you could ever get to that level with such a clunky interface. In fact, with each of his new "mini-tutorials for lazy people", I question whether it really is the interface's fault that I am not picking up the program faster or whether I have some mental condition that impedes my learning. Does C4D create a blind spot that prevents you from moving to another platform? Probably not, but I do find the transition very difficult. Also, unless their core architecture is just absolutely brilliant, don't you get the feeling that at some point that architecture will NOT be able to keep up with the rate of feature growth? Could that explain why menu navigation is just a huge eye-straining exercise? Can this pace of tool innovation continue before it all just collapses under its own weight? Personally, I think they need to shift gears and spend a little time on the interface. 2.8 was great, but now that they are capturing global mind share, the best way to capitalize on all that is to make the program easier to use. Have any of their "innovations" extended to things that we take for granted -- like a texture manager? I think I tripped over Blender's version of a texture manager but it was not intuitive. Also, why is plugin loading/management so difficult? Too many steps IMHO, or is my C4D blind spot getting in the way? (See the scripted sketch below.) And wouldn't you just love an object manager like C4D's in Blender? Nope...no innovation there. So when does the pace of Blender innovation extend to the UI? Who are their UI designers, and do they have a UI standard that all developers must adhere to? Is there any role at the Blender Foundation that is responsible for improving the user experience? Or is user experience 100% met by the shiny new features that they pack into each release? How does Blender Org treat new users....and by new I do not mean new to CG (for example, they would not know any better when using parametric primitives for the first time) but experienced artists trying Blender for the first time? Is that a segment the Blender Foundation wants to grow, or is it already growing for them? Do they want it to grow faster? Honestly, if Blender decided to say that the UI and improving the NEW user experience is now their number 1 priority, I think that would really make other companies like Maxon, Luxology and Autodesk feel a little bit nervous...or at least more nervous than they are now. Dave
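    A hedged aside on the plugin-loading complaint above: the download / install / enable / save-preferences dance can at least be scripted through Blender's own Python API. A minimal sketch, assuming Blender 2.8x or later; the zip path and module name are placeholders, not a real add-on:

    ```python
    # Minimal sketch of scripting Blender add-on installation (2.8x+ Python API).
    # The zip path and module name are placeholders, not a real add-on.
    import bpy

    ADDON_ZIP = "/path/to/some_addon.zip"   # hypothetical download location
    ADDON_MODULE = "some_addon"             # hypothetical module name inside the zip

    bpy.ops.preferences.addon_install(filepath=ADDON_ZIP)  # copy into the add-ons folder
    bpy.ops.preferences.addon_enable(module=ADDON_MODULE)  # the "tick the checkbox" step
    bpy.ops.wm.save_userpref()                              # persist across sessions
    ```

    Run it from the Scripting workspace or with `blender --background --python install_addon.py`; it does not remove any of the manual steps from the UI, it just shows they are automatable.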
  4. INSYDIUM MeshTools

    Honestly, there is not much to impress with the sneak peeks so far. The UV advection and foam capabilities added to the OpenVDBMesher are welcome additions, but (as stated before) MeshTools is not that exciting (yet?) and I see no point to the xpOpenVDBImporter. Speaking of the xpOpenVDBImporter....isn't OpenVDB an output from XP? What is to be gained by importing? For example, would entire VDB simulations now be able to be handled like any other particle in XP? Nice idea, but I can't imagine a PC/workstation and/or render farm being able to handle VDB data sets in the same quantities as what XP could generate with particles. Thousands and tens of thousands of VDB objects being added to a scene would kill most machines. Smaller quantities (around 100 or so) are more likely, and probably just as easily handled by MoGraph or Fields. So I am not sure what the benefit is here. So 2 out of 4 sneak peeks generating "some" level of interest so far feels like a 0.5 release rather than a full point release. It does make you wonder what else they have been working on. The only thing that would make sense, judging by these lackluster sneak peek videos so far, is that all their energy has been directed to GPU acceleration. Now that would be exciting. Dave
  5. Hah! I wondered if it was you. Now I get it. It is a U-Render asset and therefore warrants the U-Render handle. Regardless of who created it, it is still an impressive render. Dave
  6. Wow. Just plain wow. I love it all (texturing, lighting, DOF, and modeling). Is there some SSS going on as well? Was this done using U-Render or is that just your handle? Dave
  7. INSYDIUM MeshTools

    This looks to be a better and more powerful take on the work of Merk Vilson (Topoformer, etc.). I even checked if Merk Vilson is now employed by Insydium (he is not). If it is a separate plugin, I would need to hear more about it, as right now the couple of examples provided in that video are not that compelling...interesting, but I would need to understand how applicable and extendable the tool is to creating a variety of techniques and looks. With that said, I am confident that by the time Insydium is done with it (this is a sneak peek after all), it will be amazing. Dave
  8. INSYDIUM MeshTools

    Interesting. Is this a whole new plugin or part of X-Particles? I would imagine a whole new plugin but there is nothing about it at the Insydium site.
  9. So do you want your scene to be rendered in R20 with RS, or using C4D's material nodes? If you want to keep using RS, the current version of RS can run in C4D R19 or higher. For the scene created in a higher version of C4D (R21 or higher, as you stated), why couldn't you export it to FBX and then create an RS library file of all the materials? You then install RS on R20, import the FBX file, load up the library and apply the RS materials. If you want to use R20's material node system, I really don't think there is a quick way to port RS materials to C4D material nodes....other than taking a screen shot of the RS node set-up for each material for you to follow in recreating it in R20 material nodes. Dave
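    For what it is worth, the FBX half of that round trip can be scripted from C4D's Script Manager; the RS material library would still be built by hand in the Content Browser. A rough sketch using documented c4d Python calls (the output path is only an example):

    ```python
    # Rough sketch: export the active document to FBX from C4D's Script Manager.
    # SaveDocument and FORMAT_FBX_EXPORT are documented c4d Python API members;
    # the output path is only an example.
    import c4d

    def export_fbx(doc, path):
        return c4d.documents.SaveDocument(
            doc, path,
            c4d.SAVEDOCUMENTFLAGS_DONTADDTORECENTLIST,
            c4d.FORMAT_FBX_EXPORT)

    doc = c4d.documents.GetActiveDocument()
    export_fbx(doc, "scene_for_r20.fbx")
    ```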
  10. Just wondering if there is any value to this strategy for using and staying current over the long term - especially if you have R23 perpetual, as R23 was a pretty good release, especially in the area of cross-platform integration and animation. Purchase and keep current with Redshift, Octane and/or whatever 3rd party renderer you prefer. Really, from a "features" perspective, rendering improvements usually deliver the biggest gains to most users. For me, I love Redshift and its upgrade costs are only $250/year. Use Substance Designer for all your texturing and mapping needs. The Indie license is $240/year. I am now at $490/year total. Also, Redshift can read .sbsar files, so those two play together. Should you need a new feature in the latest version of C4D, then purchase the monthly subscription (billed monthly) for the latest version. You could do that for 9 months out of 12 and still be ahead of the annual perpetual license upgrade cost. When you have what you need, export it to FBX and Alembic to bring it back into R23. As I am already into this for $490, I could do the monthly plan for 4 months out of the year and still be ahead of the perpetual license upgrade cost. So with that mindset, what have you really lost? True, there are some convenience issues you have to deal with when working across multiple apps, but you actually end up with better tools like Substance Painter, Substance Source, Substance Alchemist and Redshift - resorting only to the latest version of C4D for a brief period of time should you need a new tool -- and you bake out the result for importation back into R23. Of course, this is all so we can stick with the interface and stability we love the most. You could always follow a fourth option: chuck it all, bite the bullet, live with a quirky interface and use Blender. Just a thought. Dave
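    A quick back-of-the-envelope check of that math. The Redshift and Substance figures are the ones quoted above; the C4D month-to-month price and the perpetual upgrade cost are placeholders that would need to be checked against current pricing:

    ```python
    # Back-of-the-envelope yearly cost comparison for the strategy above.
    # Redshift/Substance figures are from the post; the C4D monthly price and
    # the perpetual upgrade cost are placeholders, not quoted prices.
    REDSHIFT_PER_YEAR = 250
    SUBSTANCE_PER_YEAR = 240
    C4D_MONTHLY = 94          # placeholder month-to-month subscription price
    PERPETUAL_UPGRADE = 850   # placeholder annual perpetual upgrade cost

    base = REDSHIFT_PER_YEAR + SUBSTANCE_PER_YEAR   # the $490/year mentioned above

    for months in (0, 2, 4, 6, 9):
        c4d_cost = months * C4D_MONTHLY
        verdict = "ahead of" if c4d_cost < PERPETUAL_UPGRADE else "beyond"
        print(f"{months} months on subscription: ${c4d_cost} on C4D, "
              f"${base + c4d_cost} total; {verdict} a ${PERPETUAL_UPGRADE} upgrade")
    ```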
  11. Yeah....still not a lot to go on. BTW: That Cinefex issue you described is Issue 14 on The Right Stuff, and it was the Mercury space capsule on the front. The movie Apollo 13 was highlighted in Issue 63. I know my Cinefex, having all 172 issues, as I have been a subscriber for the last 40 years. Hearing they closed shop was like hearing you lost an old friend. Dave
  12. Interesting. The modeling needed to fit all that into a model in a way that makes sense is very impressive. Personally, I never get excited over cross sections of fictional craft of any kind. They are fictional, after all. But I think that extends to my overall relationship with any fictional universe or world building. I love the art. I love the imagination. I just don't love it enough to invest in learning details about it as if it were a real-world place. Now, if someone wants to model the interior of a Star Destroyer in real-world detail (textures, lighting, etc.) such that it can be used in an animation, then I am definitely interested. For example, just look at this model of the Star Trek Enterprise made by Neil F Smith on TurboSquid (and in C4D R20, no less). He modeled about 75% of the interior of the Enterprise as well, in render-ready perfection. Plus it is fully rigged. Now that is an "inside view" I can get excited about. Dave
  13. Can you describe him? Thin/fat/beard? Also, how large is that visual effects company? Do they do complete features, or were they a boutique shop that worked with other vendors on a show? Also, did he work at ILM or did he just work for Lucasfilm, as all you said was that he was at the Ranch? After 40 years of reading Cinefex, I may be able to piece together some better clues for you, as that magazine was very good at mentioning all the key creatives on every movie they covered, from pre-visualization to final effects. But just being impressed by Jurassic Park, playing with computers, etc. is just not a lot to go on, as that probably describes 75% of everyone in the industry today. Dave
  14. Look...Unreal is becoming a force in the industry and the partnerships between Unreal and the DCC apps are growing. But what is interesting is that the partnership is around the workflow from the DCC app to Unreal and NOT getting real-time rendering into the DCC programs. And this partnership is even being funded by Epic. Nine months ago, Maxon was awarded $200,000 USD by Epic Games to improve workflow integration between the two programs (read here). I am sure with that grant comes deeper insight into, and modification of, Unreal Engine 5's API so that C4D models can be better imported to take advantage of Nanite and Lumen....but in no way does that sharing include how Nanite and Lumen could work in C4D. This is a smart move by Epic Games and essentially sets the boundaries - you work on modeling, we will work on rendering, and we both agree NOT to compete in those areas. That non-compete clause could easily have been part of the grant -- again, a very smart move by Epic, especially when you realize that Epic is not just doing these deals with Maxon, as they have set aside $100 million for these grants (read here). So don't hope for Maxon to start rolling out real-time viewport performance that rivals Nanite and Lumen any time soon. If that is your heart's desire, then consider U-Render. Dave
  15. Just gorgeous. I don't care how you get there or for which purpose, the ability to create images such as this in real time (or close to it) is the ideal case. Dave
  16. Well....you and I are both right (in a manner of speaking) because Signed Distance Fields (as a mathematical concept) really just involve taking a position as an input and outputting the distance from that position to the nearest part of a shape. Raytracing does much the same thing, the difference being that when it calculates the ray from one surface to another, the secondary surface passes information (color, light intensity, etc.) along that ray back to the ray's point of origin on the originating surface, to use in the rendering calculation. I would imagine SDF just does some form of approximation at that point. In short, all renderers have to mimic the real-world properties of light, and light is always bouncing from one surface to another. Whether the ray is cast to the surface or there is some approximation of how a surrounding surface's color contributes to the current polygon being rendered - via that surface's distance, color, normal vector and angular position to all light sources - there still has to be some form of "vector calculation" going on, involving a vector from the polygon being rendered to its surroundings. Whether that calculation is based on a ray or some distance field, essentially we are talking about vector calculations. So whether it is rays in a ray tracer or vectors in an SDF renderer, the mechanics of what is going on are pretty close in my mind. At its core, the real world works in rays, so no matter how you want to approximate it, you are calculating angular position and distance from each polygon to its surroundings for every pixel being rendered. And you can't say that SDF does not use rays (or vectors), because the camera has to shoot out some form of vectors or rays so that the renderer knows what is in view or not. I would imagine the noise comes from how many SDF or ray calculations you want to make per pixel being generated. The real-time shadow demo you created in the first video shows that noise, and that noise looks like any other type of noise created by a raytraced algorithm. So getting back to my original question: how do you control noise in Unreal Engine? Dave
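    To make the SDF-versus-ray point above concrete, here is a tiny sphere-tracing sketch: the SDF only answers "how far is the nearest surface from this point", and the marcher walks a ray forward by exactly that distance until it hits something. This is a generic illustration of the technique, not how Lumen is actually implemented:

    ```python
    # Tiny sphere-tracing sketch: march a ray using a signed distance field.
    # Generic illustration only; not Unreal/Lumen's actual implementation.
    import math

    def sdf_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
        """Signed distance from point p to the surface of a sphere."""
        return math.dist(p, center) - radius

    def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
        """Walk along the ray; each step is the distance to the nearest surface."""
        t = 0.0
        for _ in range(max_steps):
            p = tuple(o + t * d for o, d in zip(origin, direction))
            d = sdf(p)
            if d < eps:
                return t        # hit: distance travelled along the ray
            t += d              # safe to step this far without crossing a surface
            if t > max_dist:
                break
        return None             # miss

    print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sdf_sphere))  # ~2.0
    ```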
  17. "It all ends up as triangles in the end" ....please don't tell Cerbera. 😄 I wondered about how well Nanite deals with noise. Is it a biased renderer? Can you control the number of secondary rays for reflection, refraction, GI and/or SSS? I would imagine not as that would kill the real time rendering capability. So how do you get noise free renders out of a game engine? For me, that is where the rubber hits the road: managing noise. Rough surfaces like desert rocks, ancient runes, etc probably hide noise better along with motion blur but there are racing car games so at some point smooth reflective surfaces need to be dealt with and that may be what is pushing Unreal 5 from a 2021 release (as originally projected) back to 2022. Not sure. Dave
  18. I have been extremely impressed by Nanite and Lumen for quite some time now, ever since their first demo. So given all that this software can do, I have to ask what this means for the traditional users of non-game-engine DCC tools. I mean, if the goal is to create realistically rendered images with physically accurate lighting, why should we be using anything OTHER than Unreal? I mean, the darn thing only needs the power of a PS5 or Xbox to run. How many PS5s could you get for an RTX 3090? We pride ourselves on not creating triangles, and Nanite is nothing but triangles. Have we been wrong all along thinking in terms of quads, ray optimization, big GPUs and full-featured (and expensive) DCC software? Should we just embrace the metalness workflow and only create using game engines? Okay....sarcastic rant mode off. But the point remains valid. Just look at the investment you can make in the traditional DCC approach: workstation-level graphics cards, tons of memory, 3rd party rendering software, fast CPUs. You could build a massive workstation, plunk down thousands in hardware and software, and still never get the real-time performance that Unreal is giving you. So why not just chuck it all and build your entire pipeline around Unreal? Hell, with Unreal all you need is a good modeler (Modo), Substance Painter and a pretty decent gaming computer. Plus, Unreal now directly links to Quixel Megascans for FREE! That puts C4D's content browser to shame. So what is the advantage to NOT using Unreal for all your creative needs? I ask that as an honest question. Dave
  19. Originally, I was thinking DasFrodo was being a bit too harsh, but when you compare the amazing quality and mood set up by the lights and the falling net to the middle portion (tennis ball cans jumping to attention), I have to agree that the cans look like filler. They don't fit into what looks to be an inspired work of art and animation. Honestly, don't feel constrained that the entire piece has to be 100% CGI. Using instead equally well-lit shots of tennis players smashing the ball back and forth in ultra slow motion, sweat flying off their hair, all starkly lit with a harsh key light, would do far more for the mood you started to create at the beginning. Jumping to a MoGraph insert of tennis ball cans just does not fit. With all that said though, I do stand in admiration of your skills. The first 38 seconds were outstanding. Dave
  20. Welcome Bendik! Just wondering as you work in Oslo, have you ever heard of Gimpville? They are a VFX house in Oslo that does some pretty outstanding work. Dave
  21. That is awesome news, as I love to see top-notch developers collaborate. I mean, XP did release xpScatter in 2020 for distributing objects over a terrain, so you can see where this might go with Terraform, as well as what Franke can do for xpScatter, as he did develop SurfaceSpread after all. Plus, Cycles 4D just implemented the Nishita Sky Texture for more accurate spectral illumination of skies based on the sun's position (e.g. a low sun creates that reddish glow of a sunset). And you can always use ExplosiaFX for clouds, etc. So I see a really good collaboration in the future, especially in the area of landscape and natural-environment creation. Something I have been hoping for since Vue lost its way in 2016 and then, two years later, made the (IMHO) disastrous decision to try to win back its market by coming back with a subscription model. I did have hope for Forester and their development of Rock Engine, but that has just gone silent except for the fact that this web-site placeholder was created in 2021: Rock Engine | Nature Redefined (rock-engine.com) My fervent wish is that Insydium buys 3D Quakers, and then they would have everything they need to create a very powerful suite of applications that rivals WorldBuilder and Terragen....simply because those applications do NOT have native fluid modules. xpTerraform....xpForester...xpWorld.....just keep placing "xp" in front of everything🙂 Dave
  22. I honestly don't think you would hold a delivery more than 5 months just to make a pun. In regards to your other questions, the book is 600 pages, I got it yesterday and I have a day job.....I will let you know. But based on the reviews at Amazon I don't think anything will be left out from abandoned story ideas to the technical challenges. Remember, at the time doing everything digitally was a risk and one that Lucas did take a lot of criticism for but still he had to lay the foundation for the digital production world we have today. Remember, it was written from a 2020 perspective so it should talk about how what we take for granted today was a huge challenge 20 years ago. That has to be a good part of the book's narrative and the reviews do give that impression. Reading about those technical challenges is entertainment to me. As I said, I really don't get caught up in the Star Wars mythos so books on the floor plans of their vehicles like you recommended really don't interest me. I just want a good story that makes sense and characters I can root for. That's about it....unfortunately, I can not say that about the sequel trilogy. Dave P.S. I noticed that the price of the book has gone up $30 since September.
  23. My Christmas present from both my daughters finally arrived today. It was a little late, to be sure, but only because there were problems getting it released since printing started back in September 2020. I guess they really had some problems, because I have number 624...and it has been almost 8 months since they started printing. For those that may have read my profile, you'll know that seeing Star Wars at the age of 16 when it first came out really cemented my love of visual effects. I really had no desire to be a Jedi, but I would have given anything to work at ILM. In fact, reading about motion-control camera systems and 4-perf pin-registered optical printers is what made me realize that I really love mechanical engineering...and thus set me on my career path to where I am today. That love soon spread into computer animation, and this book dives into everything from the Lucasfilm archives for Episodes 1 to 3. So poring through this massive 600-page book (it is 12" x 16.5" x 2" and weighs 15 pounds) will be nothing but pure inspiration while reconnecting this old duffer with the dreams of his youth. And best of all....it arrived on May the 4th! Dave P.S. If you don't hear from me for a few days, you'll know why...either I am reading every page or the book fell on me and crushed me to death.
  24. Is it just me, or does Intel look nervous when you see this web-page? Dave
  25. I would not count out X-Particles just yet. When you compare TFD to XP in terms of rendering fire and smoke, you have to ask what renderer is being used for XP. Is it Cycles, Redshift or Octane? What makes XP infinitely better than TFD (IMHO) is the ability to create VDB files for rendering in 3rd party programs like Octane, Redshift, etc. You then have more power/control to render fire and smoke than using TFD's built-in rendering solution. For example, the amount of fire and smoke in a simulation is a lot easier to control in Redshift than in TFD....or even Cycles for that matter, as I find that method a bit too complex. Now, TFD does not create VDB files by default but its own BCF format, and to get VDB output from TFD you need to run bcf2vdb from the command-line prompt. Why so difficult? Why can't it just be an option within the program? Not sure. Another thing I love about XP is that it is a multi-physics simulation solution. Water can push cloth. Water can catch on fire. You have flow fields that can then affect volume breaking. You have grains that can be affected by advection. Just a whole host of solutions. Now, you do have multi-physics in Realflow, but Realflow is engineering grade - and really slow - and expensive. Every particle carries a ton of information and the file sizes are huge. It is GPU accelerated, which helps, but all that aside, the maintenance/upgrade costs are still too high for me (close to 50% of the purchase price). Now, every fluid simulation solution out there is GPU accelerated except XP....so it stands to reason that XP needs to incorporate GPU acceleration to stay competitive. I have confidence that they will do it. Once they do, then (again, in my humble opinion) they become the most versatile, powerful, easy-to-use solution for the money, one that can produce amazing results when paired with a 3rd party rendering solution. Dave
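    An aside on the VDB handoff described above: once XP (or bcf2vdb) has written .vdb caches, you can sanity-check what the 3rd party renderer will actually see (density and temperature grids, voxel counts, etc.) with the OpenVDB Python bindings. A sketch, assuming pyopenvdb is installed; the file path is a placeholder:

    ```python
    # Sketch: inspect the grids inside one cached .vdb frame before handing it
    # to Redshift/Octane/etc. Assumes the pyopenvdb bindings are installed;
    # the file path is a placeholder.
    import pyopenvdb as vdb

    CACHE_FRAME = "xp_sim_0042.vdb"   # placeholder path to one cached frame

    for grid in vdb.readAllGridMetadata(CACHE_FRAME):
        print(grid.name, type(grid).__name__)    # e.g. density, temperature

    density = vdb.read(CACHE_FRAME, "density")   # load one grid fully
    print("active voxels:", density.activeVoxelCount())
    ```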