
3D-Pangel

Contributors Tier 2
  • Posts: 2,872
  • Days Won: 146
Everything posted by 3D-Pangel

  1. You should have posted them sooner. Really nice work!!! Great imagination. Dave
  2. Agreed. I was specifically addressing adding lights for scale. Interesting factoid about the porthole lights in the Death Star trench from Star Wars. While the trench components were made from casts of about 6 or 8 individual tiles laid out in various combinations, the crew realized that they needed to add portholes. Well, rigging fiber optics along around 40 feet of trench would have been a monstrous task. So, the next best solution was to glue little squares of highly reflective front-projection material to the model, which were then filmed in a second pass. And there were thousands of those little squares. Well, after it was all done, I think it was Gary Kurtz who looked down the trench through a camera and said, "The windows need to be smaller." Ouch!

Dave
  3. I would agree that the model does provide the correct proportions for replacing with your own model. I spend a lot of time researching the internet for photo references. Finding orthographic views of completed models is rare enough, but you will never find orthographic views of each of the major components of those same models in their proper positions (hmmm..... now wouldn't that be a neat product, and pretty cheap to produce if you already had the model). So even a crappy model at a really cheap price is the next best thing. When you think about it, it is the 3D equivalent of painting by numbers.

Plus, if the modelling had been perfect, I might not have been motivated enough to go in and break it down for rigging and animation. And if you are breaking it down piece by piece to essentially replace everything anyway, then it provides the opportunity to get each axis correct for the animation controls you want to put into it.

Dave

If I had your skills, then I would have felt the same. And you would have done an amazing job!!!

Dave
  4. So in the development of my ongoing Death Star bay, I need a laser cannon. As modeling the hangar bay and now the exterior of the bay was already pretty daunting (and as I do suffer from the dreaded "PWFS", or Personal WIP Fatigue Syndrome), I decided to see whether there was not already a laser cannon model out there that I could purchase. Lo and behold, I found one on TurboSquid! And it was ONLY $2.80 USD. Such a deal! Well... until I loaded the FBX file into C4D.

WARNING: The following photos are disturbing in nature given their graphic depiction of corrupt geometry. Viewer discretion is advised.

At first, not too bad. But think of it like purchasing a used 1967 Corvette where the body looks pretty good for its age and you feel you may have gotten a great deal. That is, until you look under the hood:

Honestly, you have to put in work to corrupt geometry this badly. This can't be a conversion issue, because it looks like random cuts were made all across the model. The model just looks like it was abused by the author. US DSS (Department of Substandard Surfacing) officials have been notified. Close to 5,000 polygons for such simple primitive shapes, too. Overall, there were over 175,000 polygons in this gun alone.

As there will be more than one cannon in the Death Star WIP (which will be a high-poly model to begin with), something needed to be done. So, I got to work. And honestly, it was a joy. It just became addictive to clean up such bad geometry while simplifying the model at the same time without losing important details. Maybe I have an OCD when it comes to triangles. Not sure, but I found the whole thing strangely therapeutic.

Getting there..... more to do:

I have already added some Xpresso and rigging controls to control the guns and turrets. Next I will work on the controls to make the guns recoil and a green volumetric muzzle flash appear (a rough sketch of the recoil idea is below). So this $2.80 USD model is providing a wealth of entertainment.

Dave

P.S. Okay Cerbera. The frightening images are over now. It is safe to go back to moderating!!!! 😁
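The recoil itself should be nothing fancy: an instant kick backward when the gun fires, then a damped ease back to rest. A minimal Python sketch of just that curve (the math only, not the actual Xpresso setup; the function name and constants are placeholders I made up):

```python
import math

def recoil_offset(t, kick=0.25, return_time=0.4):
    """Barrel offset in scene units, t seconds after firing.

    Instant kick backward, then an exponential ease back to rest.
    kick        -- maximum backward travel
    return_time -- rough time constant of the return
    """
    if t < 0.0:
        return 0.0
    return kick * math.exp(-t / return_time)   # decays back toward zero

if __name__ == "__main__":
    for frame in range(0, 31, 5):              # sample the first second at 30 fps
        t = frame / 30.0
        print(f"frame {frame:2d}: offset = {recoil_offset(t):.3f}")
```

In the scene, something like this would simply drive the barrel's local Z position from a user-data trigger or a small Xpresso/Python node.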
  5. A quick note on lights and JSplacement. I love JSplacement. Just outstanding. But be careful when designing how you want the lights to be laid out. Remember, if the dots are being used to create portholes, their purpose is to define interiors and therefore provide a sense of scale. Their layout should NOT just be random, because designers would not lay out windows randomly. Structurally, they would also not place windows in dense grids, but rather along linear rows, either vertical or horizontal.

For example, in a past unfinished WIP (which I will get back to someday), I was working on the space station from Star Trek III. You know, the one that looks like a giant mushroom. When it came time to place portholes on the cap of the mushroom, I used JSplacement to generate the texture and started with just random lights. It looked horrible. Realizing where I went wrong, I still could not get JSplacement to position lights along a series of rows as if some intelligent design process were at work. Light positions with "intention". So I had to create my own "light maker" in C4D (a rough sketch of the idea is below): light maker_p2.zip

This was used to create the following base luminance texture:

Which, when applied to the model, looked like this:

I also think it was Vector (or maybe Cerbera --- not sure, but it was from a master IMHO, so apologies if I am giving credit to the wrong person) who suggested that using that same texture to create an inward bump with a normal map helps ground the lights onto the model a bit better, rather than having them look like they are floating on top of the texture.

I really loved working on this. I think it was modeling the interior where I just lost my energy. Now I am trying to model the exterior of the Death Star but have not lost my energy on that yet. Again, how to keep your creative energy going on really huge personal projects would be another good thread.

Dave
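The core of that light maker is nothing more than scattering dots along evenly spaced rows and randomly switching runs of them on and off, so the windows cluster the way a designer might lay them out. A stand-alone sketch of the idea using Pillow (not the actual scene file from the zip; the sizes and probabilities are just guesses):

```python
import random
from PIL import Image, ImageDraw

def porthole_texture(width=2048, height=2048, rows=24,
                     dot=6, spacing=14, seed=42):
    """Black texture with white porthole dots laid out along rows.

    Dots sit on evenly spaced rows, but runs of them are randomly
    switched between lit and dark so the layout reads as intentional
    rather than as pure noise.
    """
    random.seed(seed)
    img = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(img)
    row_gap = height // (rows + 1)
    for r in range(1, rows + 1):
        y = r * row_gap
        lit = random.random() < 0.5
        x = spacing
        while x < width - spacing:
            if random.random() < 0.15:     # occasionally flip between lit and dark runs
                lit = not lit
            if lit:
                draw.rectangle([x, y, x + dot, y + dot], fill=255)
            x += spacing
    return img

if __name__ == "__main__":
    porthole_texture().save("portholes_luminance.png")
```

The same image drives the luminance channel and, inverted, a slight inward bump or normal map so the lights read as set into the hull rather than floating on top of it.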
  6. Ahh.... thank you! Glad to know I am not unique in this situation. Here is why: I have been plagued lately by hardware and software issues within my own network. Connection problems to the internet and some driver issues on my PC have made me doubt EVERYTHING. Even my wireless keyboard and mouse were locking up, or keys would stick and rapidly push out long, uncontrollable strings of the same character. So whatever time I could get on Core4D needed to be spent efficiently, because I was never sure how long I had until the network went down. That is what prompted the post about being able to quickly triage the home page for new posts.

Dave

P.S. On the plus side: I think those hardware/software issues have now passed (I hope). After weeks of working with Xfinity support and two replaced modem/routers, Xfinity finally sent someone over to the communications hut that services my neighborhood to specifically address my connection. Lo and behold, they found a problem. Not sure what it was, but things have been better since. I also tracked down the driver conflicts caused by a Win 10 update, which had been making the entire PC unstable. One crash where the screen went completely black (no BSOD this time) was the symptom pointing to a video driver, but on top of that was a corrupt USB port, which is also where the unifying receiver for the wireless keyboard and mouse was plugged in. Ouch!
  7. I use Edge, and after incrementally decreasing my screen size down to 67%, it never showed up. I am running Windows 10. I then tried it in Chrome with similar results. Any other Windows 10 users out there having the same issue? Dave
  8. Ahh... interesting statement. And good to hear. So I have to ask: if there were driver bugs on an RTX A5000 and a GeForce card, and one software fix within C4D could not satisfy both of them... which one would get priority? Dave
  9. Once, long ago, the forum menu pages used to show the date of the last post. That was very helpful when triaging the main topics in the forum for new entries, but in one of the many incarnations of the forum over the years, that date was dropped. All the other information that is listed is great (date of creation, author, view count, number of replies, etc.). Can we get the date of the last post back as well? Just a thought. Dave
  10. I think the key point to remember is the following: what will nVidia do when a problem is reported to them by a software developer? It really is not an issue of whether or not the GPU will work with the DCC application, but rather about those corner cases where a problem is found and it can ONLY be fixed by the GPU developer making a change to their drivers. Now, this does NOT mean that EVERY problem will be fixed, but rather that it depends on where that problem sits in the GPU developer's list of priorities. You only need to read the driver release notes to understand that priority. For GeForce cards, problems with "Call of Duty" go to the top of nVidia's priority list. For Quadro cards, problems with SolidWorks, or pretty much any product from Adobe or Autodesk, are at the top of the priority list. Problems from Maxon? Well, they "may" fall under problems ALSO reported by larger companies, so they get fixed as well, but I think only once did I specifically read a Maxon-reported issue in the release notes. Not saying that doesn't happen, as I don't read every release note.

But here is the worst-case scenario: you are using a GeForce card and your DCC application crashes constantly. The DCC developer says, "Not us... it is your driver. We have reported it to nVidia, but there is no response because it is not a priority. We suggest you get one of the reference cards we use." Will this happen often? No. Will it be a rarity? Probably. If it happens once, will that be enough for you? Most definitely. It happened once to me many, many years ago with that old problem when using AMD GPUs with C4D. So long ago, I can't even remember which version. But I constantly had to edit the registry to get it fixed. Fortunately, that was an easy fix, but it did make me rethink my GPU selection.

Dave
  11. That is good to hear, as I have been looking at the same model from Lenovo. Which GPU, CPU and memory exactly, and how long have you had it?

Relative to gaming vs. prosumer/professional GPUs, the prevailing argument in favor of professional GPUs is driver stability. The argument being: DCC developers are more likely to find and address issues with prosumer/professional GPUs simply because their development machines usually contain those GPUs. Should an issue be found between a prosumer/professional GPU and a professional DCC application, the GPU developer is more likely to investigate driver modifications. Not so if there is an issue with a gaming GPU, as it is the wrong application for that hardware, just as playing "Call of Duty" is the wrong application for a Quadro card. But with that said, gaming GPUs are really beginning to be the most powerful GPUs on the market, so do those arguments still hold up?
  12. If you were allowed to get your configuration at Puget Systems, then (not accounting for the "bits and bobs") that price would be $10,560. I tried (as best I could) to match your desired configuration at Lenovo but using the AMD Threadripper. Now, to be fair, Lenovo's web-based pricing is an abomination. It reminds me of the auto dealers telling you that the list price is just obscene but your price is a deal at their dealership, only to find out that ALL the dealer prices are the same. Below is what I was able to come up with. A couple of things to note:

  • DDR4 memory only. That is a disappointment, to be sure.
  • As mentioned in my previous post, they only offer the RTX A series of nVidia cards. Now, I am not sure how the two RTX 3090 Ti's at 24 GB each in the Puget system would compare to a single 48 GB A6000. My understanding is that with GPU render engines you really can't add the memory of the two cards together, and that 1+1 is slightly less than 2. So two 24 GB RTX 3090 Ti's will give you something less than 48 GB of graphics memory. You can pool them with an NVLink hardware bridge, but that is only for A5000 cards and above. Apart from the memory size of 24 GB vs. 48 GB, the only difference in specs between the two is that the A6000 memory bandwidth is 768 GB/s vs. 1008 GB/s for the RTX 3090 Ti. That is significant, but do you also need the full 48 GB of memory?
  • I tried to match your hard drive requirements as best I could. There are just no 8 TB SSDs in the Lenovo configurator. Plus, you are paying a huge premium for the 4 TB SSD. As your third drive at 8 TB was for storage, I assumed a SATA drive would be sufficient. Question: why the focus ONLY on multiple drives of varying sizes? Why nothing on mirroring drives for data protection (RAID 1)? While SSDs have no moving parts, that does not mean they are immune from corruption and failure. As I had to use SATA drives in this configuration, I would go with RAID 10 (requiring 4 drives total) for the 4 TB SATA drives; you get increased speed and redundancy, and it would increase the cost by only $220.
  • Warranty: this is what attracts me to Lenovo, as they are one of the two PC makers I know of that offer a warranty over 3 years. Why is that important? Well, after 39 years in the high-end electronics manufacturing industry (optical routers, servers, etc.) and having visited and assessed the technical manufacturing capabilities and engineering knowledge of the major Electronic Manufacturing Service providers in the world (who are basically building the motherboards, GPUs, etc. for everyone), let me just say that anything after 3 years of service is when the problems start to appear for most products and their components. That is why 3-year warranties are FREE. A good rule to follow: if you can get more than a 3-year warranty for less than 10% of the purchase price, it is probably a good bet to get it.
  • The Perks-at-Work cash back. That is money the Perks-at-Work program gives you "to spend on your next purchase". As you are buying this for work, that money could go to you directly for your next personal purchase (wink... wink... nod... nod). 😉 Think of it like flying for business but keeping the frequent flyer miles.
  • The 53% discount. This is the discount that my employer's Perks-at-Work program gives you. My car insurance program has one too... but it was only 40%. So mileage will vary, but it does make a big difference.

So, from my own perspective, yes, it would be nice to have the faster CPU and faster memory. But that would cost me 38% more if I were paying for this out of my own pocket. Will I get 38% faster render times? Not sure -- that is a pretty big difference (a quick back-of-the-envelope on that is below). So, between the cost savings and the 5-year warranty, I keep my eyes on Lenovo.

Dave
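Just to put that 38% in perspective, this is the kind of back-of-the-envelope math I run on these comparisons. A quick Python sketch (the Puget price is the one quoted above; the Lenovo price and the render hours are hypothetical placeholders):

```python
def price_premium(base, alt):
    """Percent more you pay for the alternative configuration."""
    return (alt - base) / base * 100.0

def cost_per_hour_saved(base_price, alt_price, base_hours, alt_hours):
    """Dollars paid per hour of render time saved."""
    return (alt_price - base_price) / (base_hours - alt_hours)

lenovo_price = 7650.0    # hypothetical discounted Lenovo quote
puget_price = 10560.0    # Puget Systems configuration quoted above

print(f"premium: {price_premium(lenovo_price, puget_price):.0f}% more")

# If a typical project renders in 100 h on the Lenovo and 80 h on the Puget box:
print(f"${cost_per_hour_saved(lenovo_price, puget_price, 100, 80):.0f} per render hour saved")
```

The premium only pays for itself if the faster box saves enough render hours to matter for your deadlines; for a personal project, that bar is pretty high.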
  13. Lenovo workstations can support your needs. I recommend the P620 series of workstations, as they support the AMD Threadripper processors. You can configure your own here. For price, cores and speed, Threadripper is hard to beat. You can have more than one GPU, and here they favor the nVidia RTX A family of GPUs, with RAM amounts up to 48 GB on the RTX A6000. That should meet your needs of two RTX 3090 Ti's from a memory perspective (though I am not sure RTX 3090 Ti's can be slaved together as one with NVLink). You can also have your SSD boot drives in multiple RAID configurations, as well as SATA drives (again, multiple configurations) for pure storage, and memory up to 128 GB, but at 3200 MHz ECC. If you are not in favor of Threadrippers, the top-of-the-line Intel workstation is the P920 series, found here.

Now, these are all workstations, and as such you tend to only find server-rated components (like Xeon processors, ECC memory, or the RTX A series of GPUs). But they do have prosumer and consumer machines as well, and their price point is far below HP. I would stay away from HP - build quality only starts to appear at the workstation level, and they are very pricey. Same with Dell, but they are a bit less expensive than HP.

The key to Lenovo pricing is to also order through a Perks at Work program. For example, you can get one through some car insurance programs that will net you 40% discounts. If your company only buys Lenovo, then maybe they have a Perks-at-Work program with Lenovo of their own (the discount rates vary from program to program). Overall, I have been very happy with Lenovo build quality. HP's is horrible.

Dave
  14. Better yet...he has a sense of humor: Something to try this weekend: "Hey honey! In the mood for some fine alpha channel inversion? {wink...wink...}" 😉 Dave
  15. Nodal workflows are more difficult to conceptualize. As for me, I need to watch out for approaching nodes without first thinking through the core steps. Having written (and re-written) some rather extensive software programs for work (Unix shell based, with C), I can tell you that the first pass is just inefficient spaghetti code if you start without crafting a logic diagram. I would imagine the same applies to nodes. So I resonate with your comment on complexity.

Interestingly enough, Redshift has implemented the Standard Material node - which is brilliant. Nodal trees are behind it, but the front end echoes the channel system, which is a lot easier to wrap your head around.

On the Houdini side, I started to look at Igor's posted node diagrams for Houdini. Very pleased to say that I can understand "some" of the logic of what is going on (still a far way off from "all"). That is probably one of the big benefits of nodal workflows: you can see the approach taken. I look at some of Cerbera's or Vector's (or pretty much anyone else's) masterful meshes and you really can't figure out how they got there by starting with a primitive. You learn what good polygonal modeling looks like, but you have no idea how they did it. Not so with studying nodal workflows in Houdini. They are teaching opportunities.

I think that is a big downside to C4D nodes, which may be corrected with capsules. The nodal commands lean more toward mathematical functions than everyday 3D ones: normalize, decompose, cross product, vectors-to-matrix.... and that was just to create a "look-at" function for animation (the sketch below is roughly the math involved). Honestly, I can't learn from that.

Dave
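For reference, this is roughly what that "look-at" boils down to once you strip away the nodes: build a forward vector toward the target, then use cross products to get the remaining two axes of a rotation matrix. A plain Python sketch (my own reconstruction, not Maxon's node graph; the helper names are made up):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at(position, target, world_up=(0.0, 1.0, 0.0)):
    """Axis vectors of a rotation that aims an object at a target.

    Returns (right, up, forward), which become the columns of the
    rotation matrix. Degenerates if forward is parallel to world_up.
    """
    forward = normalize(tuple(t - p for t, p in zip(target, position)))
    right = normalize(cross(world_up, forward))
    up = cross(forward, right)      # already unit length
    return right, up, forward

if __name__ == "__main__":
    for axis in look_at((0, 0, 0), (10, 5, 2)):
        print(tuple(round(c, 3) for c in axis))
```

The node version is the same math, just spread across separate normalize, cross-product and vectors-to-matrix nodes.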
  16. Show off! 😃 Actually, I watched my first POP tutorial yesterday. Definitely needs a second viewing but whereas I was expecting to be completely out in left field and lost, I was able to follow it. Again, it needs another viewing. Dave
  17. Wow... that certainly puts it into perspective. So despite how scene nodes may have improved since their "tech demo" days, you do have to ask whether they are catching on with users. IMHO, Blender geometry nodes seem to be catching on much faster --- so it can't be an "if I am going to learn nodes, I am learning Houdini" type of reasoning that keeps people from diving into C4D scene nodes. So what keeps adoption rates low or slow? Is it lack of attributes? Is it too big and too complex? Is it too unstructured? Are scene nodes the 20-ton elephant in the room that is just too big for the average user to swallow: "There are 50 ways to do this simple thing and each iteration requires a minimum of 20 nodes"?

I would imagine that if there were some case studies, they would be all over the news section. But a quick search yielded nothing. It has been well over 2 years, as I think they were officially announced with R23. Fortunately, Maxon has still been investing in modeling tools, Redshift CPU, improved cloth simulation. R26 was a much-needed update, so some respect there. But it does leave me wondering what the end state is with scene nodes. Too early to tell? Maybe.

Dave

BTW: I am starting to put my toe in the Houdini waters. Soak time is required as there is a lot there, but there is a structure and a methodology that is slowly (very slowly) becoming evident. But.... and here is the difference.... once people make that transition, they become passionate about Houdini. It was that excitement and enthusiasm from Igor and others that got me interested. I am definitely not seeing that same level of energy from C4D node users.
  18. I happened to come across this Blender geometry node "product" and I have to say it was quite impressive. So it got me thinking: how long has Blender been working on geometry nodes? How does Maxon's development time on C4D geometry nodes compare to Blender's? I know Maxon has been working on them for a few years. Weren't nodes supposed to be the essence of the new core? Are "geometry nodes" still considered a "technology demo" in R26 (isn't that like 5 or 6 releases after we first heard about it)? If C4D geometry nodes are now a full-fledged feature, has anyone (outside of Maxon employees) been proficient enough with them to make a commercially available generator of any sort?

I must say, this Buildify generator has a very strong "Houdini" vibe going. Now, I have not been keeping up with C4D nodes, nor have I been keeping up with Blender, but my general sense is that both C4D and Blender nodes appeared around the same time. I could very well be wrong, so please correct me, but I am very interested to know how their development times compare. It just "feels" to me that Blender nodes started to be generally discussed after we first heard of C4D nodes, yet people seem to be adopting Blender nodes quicker and doing more with them, judging by the video. I have yet to see a C4D geometry node example as full-featured as Buildify. Am I off base in suggesting that C4D nodes are lagging behind Blender in terms of adoption rates amongst C4D users?

If anyone has a C4D geometry node example as good as Buildify, please share. I will probably never want to get into nodes (if I am going to learn nodal modelling, I might as well learn Houdini), but I would definitely love to benefit from buying generators built by others that can do cool things like the buildings seen here in Blender.

Dave
  19. To make that analysis a bit more open, care to mention what the CPU and GPU rendering engines were? I would assume all things were equal in coming up with the different per-frame rendering times (e.g., same hardware, same scene, same output resolution, same anti-aliasing settings, etc.):

CPU = 35 minutes/frame
GPU = 10 minutes/frame
U-Render = 10 seconds/frame

Were the GPU/CPU rendering engines biased or unbiased? Were they the same engine? The fairest comparison would be for both to be Redshift (a biased engine), as R26 now runs it on both CPU and GPU. But even then, with a biased render engine, skillful optimization of the ray path settings can drastically reduce render times. To truly appreciate the 10 second/frame render times of U-Render, we need to hear more about the other render engines used in this study and how all the settings across all the render engines compared (a quick illustration of what those per-frame times add up to is below). And finally, side-by-side comparisons of the finished images would be helpful.

Thanks,
Dave
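For context, here is what those per-frame numbers mean over a full animation. A quick sketch (the 300-frame length is a hypothetical I picked; the per-frame times are the ones quoted above):

```python
frames = 300                      # hypothetical animation length
per_frame_seconds = {
    "CPU":      35 * 60,          # 35 minutes/frame
    "GPU":      10 * 60,          # 10 minutes/frame
    "U-Render": 10,               # 10 seconds/frame
}

for engine, seconds in per_frame_seconds.items():
    total_hours = frames * seconds / 3600
    print(f"{engine:8s}: {total_hours:6.1f} hours total")
```

That works out to roughly 175 hours vs. 50 hours vs. under an hour, which is exactly why the comparison needs matching settings to be meaningful.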
  20. So I started watching the Nine Between series "Houdini Isn't Scary" on YouTube. Lesson 2 (making a donut with sprinkles) required repeat viewing while on the bike this morning. Probably going to need yet another pass at it tomorrow. Here are my high-level takeaways. Again, probably a bit muddled, as I still need soak time to let it sink in, and I am probably mixing terminology as well.

In order to keep everything procedural, you need to not only define various data elements of an object but also how you arrived at those "groupings." That is why making a simple selection involves a couple of nodes.

Also, as your node tree grows, it appears that you are copying polygons and laying them all on top of each other, with each copy undergoing different modifications (and creating non-manifold surfaces in the process). That really is not what is happening, because you are just using the attributes of that object for modification. This is where Houdini is different from standard DCC applications, and probably the biggest thing to unlearn. With that said, attributes are a huge part of your ability to create procedural systems. Not sure why attributes are not available via a drop-down list box, given that they are case sensitive. I know you can define your own, but still, there are plenty of UI ways to have both.

Node order makes a big difference - so you need to be aware of not only what you are modifying but when.

Okay... my head hurts. Now on to my day job, where I get to deal with simple things like weak organic acids and how their rate of consumption impacts oxide removal during IR pre-heat. 😄

Dave
  21. Very impressed with this shader. There is a 10% discount code at the YouTube site, which brought the price down to $15.30 USD. Also, the .sbsar file will NOT work with R23 because it requires the latest Substance Engine to run, but you can download the latest Substance Player to generate 2K masks for use in C4D or any other DCC software. If you want to run it within C4D, I think you need R25 or later (not sure when the latest Substance Engine was implemented within C4D). It comes with about 20 presets and over 240 MB of textures (9 texture sets: Color, Emissive, Normal and ORMH --- which appears to be a packed Ambient Occlusion, Roughness, Metallic and Height map, covering the remaining channels the sbsar file produces; a sketch for splitting it into separate maps is below). Nevertheless, very useful for such a low cost (IMHO). Dave
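If you end up with that packed ORMH map and want individual grayscale textures for C4D's channel system, splitting the channels takes a couple of lines with Pillow. A small sketch (the R = AO, G = Roughness, B = Metallic, A = Height packing is my assumption, so check the shader's documentation; the filename is a placeholder):

```python
from PIL import Image

# Assumed packing: R = ambient occlusion, G = roughness, B = metallic, A = height
packed = Image.open("ormh_packed.png").convert("RGBA")
names = ("ao", "roughness", "metallic", "height")

for channel, name in zip(packed.split(), names):
    channel.save(f"split_{name}.png")   # one grayscale map per channel
```

Substance Player may let you export the maps separately depending on the sbsar's outputs, but this is handy when all you have is the packed file.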
  22. The C drive (the SSD) is getting a bit full, with only 23.7 GB remaining. There are some programs I really should remove, though. Housecleaning is always good. Fortunately, there is another drive, D (a SATA drive), with 837 GB free, so it is good to know that I can redirect the cache to the D drive. Plenty of room there. Dave
  23. Thank you all. I have created a Houdini account and looked at the downloads. At 4.5 GB of required hard drive space, I probably need to do some disk cleaning first (I only have 23 GB left). But I have the training recommendations and access to the free version..... now I need space and time (whooo... that certainly sounds metaphysical, doesn't it!).

My learning approach is rather interesting: just listen to the videos first. Figure out the whys by watching. Focusing on key-clicks during a tutorial reinforces muscle memory but not understanding. So it is a two-pass process. The first pass is to just listen and absorb. The second is to sit down with the software and grow the muscle memory with the UI. As CG is not my day job, the first pass is actually the easiest thing to find time for, as I do about 50 minutes on the Echelon every morning and the bike is in front of a TV with internet access. So those free links are very helpful.

Dave