3D-Pangel

Everything posted by 3D-Pangel

  1. OMG...look at all those gloriously curved compound/complex surfaces!!! Amazing. The only way you could make that modeling work any harder on yourself would be to model it while looking at the monitor through a mirror, with the mouse in the wrong hand and pointing backwards, and completely forgoing any morning coffee!!!! May we see the mesh? Dave
  2. I have to disagree...I rather liked the music. The models...not so much. Honestly, if you need an AI program to generate a blocky table (5 cubes slapped together) or a blocky sofa (more cubes slapped together with horrendous texturing), then you should probably rethink your life in 3D. Also, the AI is being trained mostly on furniture right now...so not quite the DALL-E of 3D. Plus, why even pursue this when photogrammetry programs produce far better results and are infinitely more versatile? Meshroom has nothing to fear. They are also focusing on the food industry with this AI program. I find this focus interesting, as it seems to be aimed at 3D artists supporting the advertising industry. But both food and furniture are sold on their visual appeal, so whatever you create in 3D needs to be spot-on perfect, and so far the results are less than mediocre. Wait a minute! Is the benefit of an AI program creating 3D models of food and furniture the ability to create mashups of the two? Is that an untapped demand area in the 3D world? I never thought of that! Here is what the video demo produced when given "Chair with a cheeseburger texture." Enough said. Dave
  3. Not sure why it just couldn't be an unwrapped spherical map of the kind we put on spheres every day in 3D. Apart from the Earth and moon images, every image in that video is 3D generated. Now, what I think is cool is that this has the makings of the world's largest Stagecraft installation! Stagecraft is the brand name for ILM's technique of creating a virtual "volume" using LED screens. What makes every Stagecraft stage a "virtual reality" is that the background is driven by Unreal Engine, so as the camera moves, the part of the screen visible to the camera is rendered in real time to reflect the parallax shift necessary to give the image a sense of depth. Plus, the LEDs illuminate everything on the stage, so the lighting on the live-action set is perfect and blends with the background. Live-action foreground and background are captured together in real time, and no post work is required.

Unfortunately, existing Stagecraft stages consist of 70-foot-diameter "cylindrical" LED screens wrapping around the shooting stage, with a flat ceiling of LED screens over that cylinder that continues the lighting but isn't the best for providing a background. Some post work is always required to blend the seam between the cylinder and ceiling screens should the shooting camera capture the upper corners of the stage. But this massive spherical stage would be the world's largest virtual shooting stage if the LED screens were replaced with concave versions that pointed inward rather than outward. Honestly, you could point the camera anywhere, because it is a sphere, and never have to deal with the edge blending of those cylindrical stages. Plus, given its size, imagine the sets you could put inside and the action that could be staged!

Now, the downside of something that big is getting the necessary resolution of your virtual backgrounds rendered in real time. Would 16K backgrounds be required? Could that be done in real time? Not sure. Honestly, if the Vegas developers were smart, they would have built that sphere with LED screens on the outside AND the inside. If they had, I am sure filmmakers would come running to shoot there, as it would be far cheaper than shooting on location. Dave
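To put that 16K question in rough numbers, here is a quick back-of-the-envelope sketch. Every figure in it is an assumption for illustration only (the sphere's interior diameter and the comparison pitch of a film-volume LED panel are my guesses, not published specs):

```python
import math

# Assumed figures -- not published specs, just plausible round numbers.
sphere_diameter_m = 150.0      # assumed interior diameter of the venue
horizontal_pixels = 16_384     # a "16K" wraparound background

# Pixel pitch if 16K pixels wrap the full interior circumference.
circumference_m = math.pi * sphere_diameter_m
pixel_pitch_mm = circumference_m / horizontal_pixels * 1000

# LED panels on film virtual-production volumes run on the order
# of ~2.8 mm pitch (again, an assumed figure for comparison).
volume_pitch_mm = 2.8

print(f"pixel pitch at 16K: {pixel_pitch_mm:.1f} mm")
print(f"roughly {pixel_pitch_mm / volume_pitch_mm:.0f}x coarser than a film-volume panel")
```

Under those assumptions, even a 16K wrap works out to a pitch near 29 mm, about an order of magnitude coarser than a Stagecraft-style panel. That is fine for distant backgrounds viewed from the floor, but it suggests the real bottleneck for close-up film work would be rendering resolution, not panel hardware.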
  4. I have actually had more issues with USB drives than with other types of drives. They can fail to be recognized by a port if improperly removed, or simply because of some hardware conflict on your PC - which can come from anywhere. I have also found them to be more prone to physical damage than other drives. While this may not be an issue, every connector has an insertion life. For example, DIMM memory and CPUs are rated around 250 insertions --- simply because manufacturers don't want to put more than a few microns of gold on the pins, and you should not be removing them often anyway. USB-A insertion life is around 1,500 insertions, which is far less than USB-C and micro-USB at 10,000 cycles. While 1,500 insertions is probably more than you will ever need, it does point to the fact that USB-A contacts are not plated as thickly as other types of connectors and are therefore more prone to failure. Point being: I am more likely to plug and unplug a USB-A drive simply because I tend to use those ports more often. USB-C and micro-USB...not so much. Just a thought if the goal is long-term storage. Dave
  5. I have been working on a WIP for quite some time now and have been posting renders in my gallery throughout that period. Now that I am about done, I want to delete those early WIP renders and replace them with the finished renders. But so far, when I go to "Manage by Album," I am left with only the option of adding new renders. I cannot figure out how to select and delete the older ones, as the "Manage" function only shows the first image that was ever posted...and even there I cannot find any delete option. What am I doing wrong? Thanks, Dave
  6. Interesting question. The shortest and most accurate answer is "nothing is permanently safe." Over time, any storage medium will degrade. Cloud services could go out of business. At-home storage media could also become unreadable as technology constantly changes the manner in which digital media is saved (anyone still have floppy disks?). On top of all that, the software meant to read those files may no longer be supported by future versions of that software (e.g., C4D R8) or by an ever-changing OS (anyone still running Win98?).

Now, you should also consider who you are purchasing your assets from. Do they have a "Customer Zone" where you can download your purchases when you need them? How many downloads do they allow you? Let the company store them for you, as their longevity as a company is tied to their ability to protect their assets. Let them carry the burden rather than you. I favor Evermotion and Kitbash3D for just that reason. Kitbash3D will also upgrade their assets, which benefits you as they add compatibility for different rendering engines. They once did not texture their assets or make them Redshift compatible, but as they added those features, you benefited. Kitbash3D has now come out with Cargo, which allows you to select specific models from their collections, so you do not even have to download an entire collection (easily around 3 to 4 GB). Now, I have issues with their new licensing model, but that is another discussion for another time.

But if you want to take physical possession of the asset, then for the reasons stated before about the longevity of technology leading to obsolescence, plan on keeping a file for 10 years at most (and that may be stretching it). The beauty of that limit is that you probably won't want to use that file after 5 years anyway, given how much work would be required to upgrade it to the latest rendering engine and/or software.

So, if you want to manage the storage on your own, always consider some dual-drive array. SATA drives have been around for 20 years and are still available on the market as external RAID 1 arrays capable of being hot swapped. So even though they are mechanical, you have redundancy. It also appears that the SATA format is well established, so the risk of obsolescence is lower than for other formats. Ideally, you would want RAID 1 SSDs, but as I have yet to find them in an external drive, you would need to consider internal SSDs that are hot-swappable. And from personal experience, SSDs do FAIL, so I would still want RAID 1 redundancy - but you may need to consider a new PC purchase if your system was not already configured for hot-swappable SSDs.

Now, why do I favor external drives or hot-swappable internal drives for "permanent" storage? Well, an external drive removes one source of problems in the recovery chain: the failure of your PC (and the probability of that happening increases significantly after year 3). It is a lot harder to recover files from an internal drive that is NOT hot-swappable in a permanently damaged PC that won't boot. It can be done, but depending on the age of that PC, it could be a challenge in terms of connections and drivers. Not at all with an external SATA RAID 1 array connected via USB, and not as difficult with an internal hot-swappable SSD. So, if you want all those benefits without the need to purchase a whole new PC, an external SATA RAID 1 array may be your only option.

None of these recommendations take into account cost and how those costs compare to cloud storage. There are other threads on the forum about the pros and cons of cloud storage. Personally, I am okay with putting purchased models in the cloud, but I would never put financial files there...and financial files are far more important than 3D models...so all the same concerns about protection and redundancy apply. Not sure I want my yearly tax files with my Social Security number in any document in the cloud. So, if I am going to protect those files locally, I might as well protect my 3D files locally as well...and that concludes my argument on cloud storage. Dave

P.S. If anyone knows where to get an external SSD drive in a RAID 1 configuration capable of having either failed SSD hot swapped, please let me know.
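Whichever drives you choose, long-term archives are also vulnerable to silent bit rot that RAID alone won't flag. A minimal sketch (plain Python standard library, no vendor tools assumed; the folder and file names are placeholders) of a checksum manifest you can re-run years later to verify every file still reads back identically:

```python
import hashlib
import json
import pathlib

def build_manifest(root: str) -> dict:
    """Map each file under root to its SHA-256 digest."""
    manifest = {}
    root_path = pathlib.Path(root)
    for path in sorted(root_path.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root_path))] = digest
    return manifest

def verify_manifest(root: str, manifest: dict) -> list:
    """Return the relative paths whose contents changed or went missing."""
    current = build_manifest(root)
    return sorted(rel for rel, digest in manifest.items()
                  if current.get(rel) != digest)

if __name__ == "__main__":
    # Example: snapshot an archive folder, then store manifest.json
    # alongside each backup copy so any copy can be checked later.
    manifest = build_manifest("my_archive")
    pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

Re-running `verify_manifest` against both halves of a mirrored pair tells you which copy is still good if they ever disagree - something the RAID controller itself can't always decide.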
  7. Every image, from the interior of the Death Star bay to the widest shot of the Death Star exterior, was done at the same scale. That inner bay detail could fit nicely into those far-distant bays in the exterior shot. The camera could go from deep space with the equatorial trench in the distance all the way through the blast doors and down the hallway at the back of the bay in one shot. My intent in going 1:1 scale was to do just that: start with a close-up of a ship as it passes by, have the camera pan with the ship to reveal the Death Star in the distance, and then, after a short beat, follow it all the way into one of the bays - all in one shot. The goal was to sell the sheer size of the Death Star as you approach it, which is a shot you never really saw in any of the movies. Everything was a cut-away to an establishing wide shot that showed scale by making the ship smaller, rather than a tracking shot following the same ship into something huge.

Now, the diameter of the Death Star is 200 km, but some of the ships are only 30 m long, and some of the nurnies on those ships are measured in centimeters. As the goal was to show the massive size of the Death Star relative to a small ship, I needed to preserve all those scales in the same space. So it was a bit of a trick, as modeling something that big creates its own problems. Remember, everything in the trench needed to be instanced along the curve of the equatorial trench - and at 1:1 scale, the center of that curve was literally half a Death Star away.

I do like to see how much I can push C4D and my own ability to problem-solve by doing something out of the ordinary...and, by my own admission, somewhat foolish. As a hobbyist, my excuse for the sheer folly of these challenges would be "I did not know any better (like compositing the ship into a Death Star background rendered separately at a different scale), as I am a rank amateur." Professionals get no such excuse, so they cannot throw caution to the wind. They have something to lose if they mess up (their job) and something to win if they do it economically (future employment). I have no such motivations other than to say, "Hey, it worked!"

At over 270 million polygons, I think I pushed C4D and my GPU pretty hard and found their limits. I also checked memory usage, and even though the file size was only 68 MB thanks to instances, the scene was consuming over 28 GB of memory. Animating will now be the next challenge.
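As a sanity check on those numbers, a quick back-of-the-envelope calculation (using only the figures quoted above) shows why instancing is doing the heavy lifting:

```python
# Figures quoted in the post above; everything else is simple arithmetic.
scene_polygons = 270e6       # ~270 million polygons on screen
ram_used = 28 * 2**30        # ~28 GB resident in memory
file_size = 68 * 2**20       # 68 MB project file on disk

# How much RAM each on-screen polygon costs once instances expand.
bytes_per_polygon = ram_used / scene_polygons
# How much the scene "expands" from the instanced file to live memory.
expansion_ratio = ram_used / file_size

print(f"~{bytes_per_polygon:.0f} bytes of RAM per polygon")
print(f"scene expands ~{expansion_ratio:.0f}x from file to memory")
```

Roughly a hundred bytes per polygon is in the plausible range for points, normals, and connectivity, so the 28 GB figure is believable, and the ~400x gap between file size and memory is exactly the saving instancing buys on disk.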
  8. Distant Death Star

    From the album: Death Star Landing Bay

    This shows pretty much just how much of the Death Star exterior I modeled.
  9. From the album: Death Star Landing Bay

    Believe it or not, all those surfaces do follow the curve of a sphere at that scale.
  10. From the album: Death Star Landing Bay

    Guns in place
  11. Just found the time to watch the full Q2 livestream. The real-time fluid meshing is astounding. And in a domain-less environment to boot!!!!! I know there is more to do (viscosity, exporting, foam, ocean solvers, etc.), but so far it is just amazing. The real-time fluid meshing and rendering against the HDRI is just incredible. Honestly, I did not know if I was watching a LiquidGen demo or a clip from "The Slow Mo Guys" on YouTube.😀 Please also consider a GRAIN solver!!!! Destruction effects in real time would be a game changer, because their results can be unpredictable and iteration is required. Be sure to include the ability to control grain destruction with falloff fields and/or imported animated vertex maps. Speaking of maps, what is the thinking behind creating wet maps? Will the host program generate them from the exported VDB file (based on collision detection), or will LiquidGen export the wet map itself? Dave
  12. Just wanted to draw your attention to Matthew Winchell. Here is what impressed me: He loves the old practical way of doing visual effects. That is why I referred to him as an up-and-coming "VFX" supervisor and not a VFX or CGI artist. He knows both, but I feel he is better suited to be a successful VFX supervisor because he has a much broader base of knowledge to draw from - he knows both the practical and the CG way to solve problems. He is only 20 years old. I always thought you needed to be an old fart to appreciate the non-CGI ways of doing things...so his willingness to think beyond a CGI solution is a breath of fresh air. He is a "maker" of the highest order and has been for most of his life - pretty fearless in the projects he takes on, as they would be daunting to most other people. Just check out the link above to understand what I am talking about.

He saw the ILM presentation on the motion-control camera system designed and built by John Knoll for The Mandalorian and decided to build his own. He did such a good job that it captured the attention of the people at ILM. So he is what I would call a "full-stack maker," as he can both design and build the mechanical and electronic components of whatever he sets his mind to. I see many parallels between him and John Knoll in how they started their career paths. John Knoll got noticed at ILM for designing and building his own slit-scan system (think of the stargate scene in 2001) while in college.

I identify with him because I had that same level of interest in VFX at his age and therefore went to Worcester Polytechnic Institute (WPI) for mechanical engineering, simply because at that time VFX was all about motion-control camera systems and optical printing. Well...he is ALSO now a sophomore at WPI. If only I were 40 years younger. So keep an eye out for his name in the VFX trades in the coming years. Hopefully, he will be the SECOND VFX professional to graduate from WPI. The first was Pete Travers (VFX Supervisor at Sony Pictures Imageworks), who graduated back in 1993. For an engineering school (and NOT in any way a film school), there must be something about WPI that attracts people interested in visual effects. You can find an interview with Matthew Winchell at InCamera about his early start with that motion-control camera system. Dave
  13. First off...kudos for showing some class by NOT revealing the YouTuber's channel. There is enough of that in the world these days. When you mentioned that this person also had Xpresso tutorials, I went back to my go-to sites for Xpresso knowledge (Xpresso Mechanic and Motionology) and was happy to find that they do NOT dabble in Python. You have three choices in how to respond to this person:

1. Move on. The person is not charging anyone for these tutorials, so it is not that bad of a "training" crime. I know you are rightfully concerned about the proliferation of bad practices and incorrect information, but a serious student of Python is more than likely NOT going to rely on one person for all their Python knowledge, simply because no single person can provide the full range of training required to master all of Python. So the student will have to find other tutorial sources and, in that search, will find out the "right" way to do things. That has happened to me, though with 3D training. Ultimately, you begin to separate bad training from good, and you delete the references to the bad trainers. The longevity and growth of channels like Rocket Lasso and Eyedesyn is based on the quality of their lessons. The bad ones simply never last that long.

2. Hit the Dislike button and move on. See the comments above. At least here your disapproval provides some warning to potential new students, especially if you are NOT the only person giving the site a thumbs-down. And who knows, the author may reach out to ask why...doubtful, but it could happen.

3. Post a comment that does NOT attack the author but directs students to other sites with proper Python training. To keep the comment alive, start out with "If you liked this training, then you might also be interested in these channels (none of which are mine), which could also be a great help to you." This is my least preferred option, as I am not sure how it could NOT invite pushback, but if you feel you must do something, this is about as far as I would go.

Attacking the trainer will drag you both through the mud. I would hate to see that happen given your reputation. I consider you an expert and have always been very appreciative of your help and the depth of your knowledge. I would hate to see you do anything to damage the "C4D Jed" brand identity that you have built up over the years. Just my 2 cents. Dave
  14. What makes an off-site facility immune from fire, pipe breaks, etc.? I'm not sure the halon fire suppression that any reputable cloud storage facility should have is the best thing for SATA drives. Nevertheless, that should not be your main concern. How many times have you heard that some major data center got hacked and the personal information of millions of people was compromised? Hackers are not that interested in my little 6 TB drive, but I have had my bank issue me new credit card after credit card because some retailer's, financial group's, or medical group's data center got hacked. I fear hackers more than I fear a burst pipe or a fire in my home...hell, if I were that nervous about fire or flood where I live, I would do something about it for my own safety more so than my hard drive's safety. Dave
  15. I use WD's My Book Duo external hard drive configured with two hot-swappable 6 TB drives in RAID 1 (redundancy). Now, unless you are paranoid like me about putting everything into someone else's hands with cloud storage (and the horror stories in this thread seem to support that paranoia), cloud storage is the cheapest solution, as external RAID arrays are not cheap. You can also access your files from anywhere with the cloud, so that is obviously better - no argument there. But because I enjoy the peace of mind of having full access to my files in all situations while they are still safe and secure via redundant backup, I am very happy with external RAID arrays. Sometimes you can't put a price on peace of mind. Dave
  16. The only information on GeoGen I found (though I did not check every possible social media outlet) was that 10 seconds of video (the rest was sound effects). From the frame that was captured, I saw an interesting feathering to the smoke that was not typical of a standard fluid simulation (the area in question is circled with a red ellipse). Again, guided by only those 10 seconds of animation - which started with a Voronoi fracture and transitioned to a neat smoke effect - I took a guess. Not sure whether that smoke-feathering effect on particles was part of GeoGen or already native to EmberGen (I even said as much), I threw caution to the wind and hoped that it was a means to get large-scale fluid simulations - something that (to date) is really only found in Houdini.
  17. Getting the particle meshing right is a huge step toward realistic fluid simulations at all scales, and I must add that I found this animation very nicely done. I'm not sure even Houdini could show results like these in real time. Had the animation not jerkily shifted viewing angle during playback, I would not have realized this was real-time viewport performance (maybe even the developers were in awe and forgot to move the viewport while recording the demo). So again, kudos to that plucky little development team at JangaFX going head-to-head with the PhDs at SideFX (full disclosure: I recall an interview where Side Effects touted the number of PhDs on staff, though I could find nothing to substantiate that claim on the web). Now, with that said, after a quick check of the EmberGen site for current pricing, I found a reference to GeoGen with a beta video released 6 months ago but promised for 2023. You have to give it a bit of time to play out, but at the one-minute mark you will see this frame on the left: So I sincerely hope that GeoGen provides a little more than just another Voronoi fracturing solution and takes a step toward achieving large-scale fluid simulations. That would be huge. To do it in REAL TIME would be mind-blowing (IMHO). Dave
  18. I have a love/hate relationship with both books and tutorials.

Books are great because you can proceed at your own speed in a very linear fashion. You don't have to hit pause and rewind to review a point. This is especially true when shortcuts are used, as in a video tutorial you will sometimes get a rapid barrage of shortcut commands blasted at you by the instructor (Ctrl+U, then swipe left, then LMB and Select All, followed by Ctrl+C...on and on). When reading a manual, that sequence is usually written down for you to see. The downside is that should your results differ, there is no visual feedback as you follow the written sequence to tell you where you went wrong. The better video tutorials put every command on the screen as it is executed, including shortcuts. So nothing is lost, but there is a lot of pausing.

Video tutorials are great because there is only so much you can describe in words, and as a visual learner I like to see what is being taught as it is being taught. That overcomes the problem mentioned above of not being able to determine where you went wrong with written instructions. But sometimes the instructor is proceeding under the misconception that shorter tutorials are more marketable, so they just blast through the subject matter: "Hey...go fast, because the student has a pause button." Honestly, overuse of the pause button as you watch a video 10 seconds at a time can quickly create attention fatigue, as the forest is quickly lost for the trees when you have to watch in such small bites. I could never get into modo because the 3DGarage videos were that way - plus it was all shortcuts, and they were not displayed on the screen.

Now, both written and video tutorials are horrible if a detail, step, or command option is left out. That sometimes happens in both, and when it does you lose confidence in the instructor and the book or video. You simply stop following that instructor when you realize you wasted a lot of time only to end up at a dead end.

Overall, I favor video tutorials, especially those where the instructor proceeds at a moderate pace and the shortcuts and commands are printed on the screen as they speak. The best video tutorials also show where you can go wrong, why one method is better than another, and a little of the logic behind the command and/or how the tool works. That part I love, because once you internalize what the program is doing, you can quickly master the function. Bob Walmsley at Insydium is that type of teacher --- one of my favorites. Too bad he did not teach modeling. My other favorite is Hrvoje. Great instructor. Too bad he no longer has anything to do with the forum. 🙄

Now, with all that said, I'm not sure about a modeling book that uses a different application (in this case modo). It all depends on how the book is structured. If it focuses purely on common problem-solving techniques (e.g., how to resolve an edge-loop issue on an interior curve with a triangle in the corner) in a step-by-step visual manner without leaning on specific commands, it may be a good resource. You just have to hope that your program can match some of those steps, or you will get lost going from A to B in the instructions. But at $48, I am not willing to take that risk. Better to just buy MILG 11. Dave
  19. Really loving this thread. For example, I never would have known that Particle Illusion is now available as a free download in its standalone version. While you can't use it for anything beyond playing with the emitters, that alone is enough for me, because it is just so much fun to use.

I also like finding out about all the new software that is essentially using AI to completely remove the more mundane aspects of the VFX pipeline: camera tracking, motion tracking, rotoscoping, etc...the list goes on. But as mundane as those tasks are, they did make you appreciate the effect that much more, simply because you realized how much work went into it. Now you just sit back, press three buttons, and you are done. So, while AI won't be the death of the VFX artist (though it does raise the bar for creating standout effects), your appreciation of a VFX scene will change. Think about how we were all blown away by that opening shot in The Empire Strikes Back: the camera move down onto the running tauntaun over the Hoth snowscape. No camera-tracking software was used - just sheer artistry, ingenuity, and back-breaking work. If that shot were done today, we would not give it a second thought, because tracking software is so mature and readily available.

Oddly enough, I also wonder if AI will change how we appreciate the people who develop the software we use. To continue my theme of camera tracking: will we still marvel at tracking software that can create flawless tracking shots from scenes shot out of focus, or with rack-focus changes, in poor light, with lots of motion blur and no discernible stable points to use as tracking markers, when we know that some AI algorithm did all the work? Sure, we may marvel at the AI algorithm itself, but AI is becoming this ubiquitous monster that gives the impression the software created itself through its own learning algorithm. Scientists, mathematicians, and software developers sitting down and cranking out the math, building the logic flow, and iterating for days and even years to refine the software into a stable piece of magnificent code are now upstaged by an AI algorithm that did all that work in 3 minutes. While we may all marvel at AI, I think it cheapens the whole effort of creating anything digital: from software development to finished image. There used to be an old cartoon of scientists reviewing a pretty poor algorithm on a chalkboard, which I have updated below to emphasize this point:

Enough said...go render something........before it is done for you by your AI surrogate. Dave
  20. From the album: Death Star Landing Bay

    Down to adding the fine details, like the laser cannon that I did (currently available for free download).
  21. I was knocked out by the polar bear ads from long ago. The painterly effects were very good (NPR of live-action scenes), but too subtle and a little too jittery in some cases (IMHO). So subtle that you barely noticed them in some shots - but that may have been the point, because they were trying to match the realistic work of the great masters. As for me, this scene from "What Dreams May Come" still knocks my socks off...and it was done 25 years ago: Advancements in optical-flow technology (tracing the flow of each pixel in a moving image) were the groundbreaking science that made this scene possible, ultimately earning the movie the Oscar for Best Visual Effects. Read more about it here. Dave