Everything posted by 3D-Pangel
-
Ahh....thank you! Glad to know I am not unique in this situation. Here is why: I have been plagued lately by hardware and software issues within my own network. Connection problems to the internet and some driver issues on my PC have made me doubt EVERYTHING. Even my wireless keyboard and mouse were locking up, or sticking keys were rapidly pushing out long, uncontrollable strings of the same character. So what time I could get on Core4D needed to be spent efficiently, because I was never sure how long I had until the network went down. That is what prompted the post about being able to quickly triage the home page for new posts.

Dave

P.S. On the plus side: I think those hardware/software issues have now passed (I hope). After weeks of working with Xfinity support and two replaced modem/routers, Xfinity finally sent someone over to the communications hut that services my neighborhood to specifically address my connection. Lo and behold, they found a problem. Not sure what it was, but things have been better since. I also found the driver conflicts caused by a Win 10 update, which would make the entire PC unstable. One crash where the screen went completely black (no BSOD this time) was the symptom pointing to a video driver, but on top of that was a corrupt USB port, which was also where the unifying receiver for the wireless keyboard and mouse was plugged in. Ouch!
-
I use Edge, and even after incrementally decreasing my screen size down to 67%, it never showed up. I am running Windows 10. I then tried it in Chrome with similar results. Any other Windows 10 users out there having the same issue? Dave
-
Computer Purchase for Cinema4D: Navigating Corporate Pushback.
3D-Pangel replied to BLSmith's topic in Discussions
Ahh...interesting statement. And good to hear. So I have to ask: if there were driver bugs on both an RTX-A5000 and a GeForce card, and one software fix within C4D does not satisfy both of them...which one would get priority? Dave -
Once, long ago, the forum menu pages used to have the date of the last post. That was very helpful when triaging the main topics in the forum for new entries but in one of the many incarnations of the forum over the years, that date was dropped. All the other information that is listed is great (date of creation, author, view quantity and number of replies, etc). Can we get the date of the last post back as well? Just a thought. Dave
-
I think the key point to remember is the following: what will nVidia do when a problem is reported to them by a software developer? It really is not an issue of whether or not the GPU will work with the DCC application, but rather of those corner cases where a problem is found and can ONLY be fixed by the GPU developer making a change to their drivers. Now, this does NOT mean that EVERY problem will be fixed, but rather that it depends on where that problem sits in the GPU developer's list of priorities. You only need to read the driver release notes to understand that priority. For GeForce cards, problems with "Call of Duty" will go to the top of nVidia's priority list. For Quadro cards, problems with SolidWorks, or pretty much any product from Adobe or Autodesk, will be at the top of the priority list. Problems from Maxon? Well, they "may" overlap with problems ALSO reported by larger companies, so they get fixed as well, but I think only once did I specifically read of a Maxon-reported issue in the release notes. Not saying that doesn't happen, as I don't read every release note.

But here is the worst-case scenario: you are using a GeForce card and your DCC application crashes constantly. The DCC developer says "Not us...it is your driver. We have reported it to nVidia, but there is no response because it is not a priority. We suggest you get one of the reference cards we use." Will this happen often? No. Will it be a rarity? Probably. If it happens even once, will that be enough for you? Most definitely. It happened once to me many, many years ago with that old problem when using AMD GPUs with C4D. So long ago, I can't even remember which version. But I constantly had to edit the registry to get it fixed. Fortunately, that was an easy fix, but it did make me rethink my GPU selection. Dave -
That is good to hear, as I have been looking at the same model from Lenovo. Which GPU, CPU and memory exactly, and how long have you had it?

Relative to gaming vs. prosumer/professional GPUs, the prevailing argument in favor of professional GPUs is driver stability. The argument being: DCC developers are more likely to find and address issues with prosumer/professional GPUs simply because their development machines usually contain those GPUs. Should an issue be found between a prosumer/professional GPU and a professional DCC application, the GPU developer is more likely to investigate driver modifications. Not so if there is an issue with a gaming GPU, as it is the wrong application for that hardware, just as playing "Call of Duty" is the wrong application for a Quadro card. But with that said, gaming GPUs are really beginning to be the most powerful GPUs on the market, so do those arguments still hold up? -
If you were allowed to get your configuration at Puget Systems, then (not accounting for the "bits and bobs") that price would be $10,560. I tried (as best I could) to match your desired configuration at Lenovo, but using the AMD Threadripper. Now, to be fair, Lenovo's web-based pricing is an abomination. It reminds me of the auto dealers telling you that the list price is just obscene, but your price is a deal at their dealership, only to find out that ALL the dealer prices are the same. Below is what I was able to come up with. A couple of things to note:

- DDR4 memory only. That is a disappointment to be sure.
- As mentioned in my previous post, they only offer the RTX-A series of nVidia cards. Now, I am not sure how the two RTX 3090 Ti's at 24 GB each in the Puget system would compare to a single 48 GB A6000. My understanding is that with GPU render engines you really can't simply add the memory of the two cards together, and that 1+1 is slightly less than 2. So two 24 GB RTX 3090 Ti's will give you something less than 48 GB of usable graphics memory. You can pool memory with an NVLink hardware bridge, but on the pro side that is only for A5000 cards and above. Apart from the memory size of 24 GB vs. 48 GB, the only difference in specs between the two is that the A6000 memory bandwidth is 768 GB/s vs. 1008 GB/s for the RTX 3090 Ti. That is significant, but do you also need the full 48 GB of memory?
- I tried to match your hard drive requirements as best I could. There are just no 8TB SSDs in the Lenovo configurator. Plus, you are paying a huge premium for the 4TB SSD drive. As your third drive at 8TB was for storage, I assumed a SATA drive would be sufficient. Question: why the focus ONLY on multiple drives of varying sizes? Why nothing on mirroring drives for data protection (RAID 1)? While SSD drives have no moving parts, that does not mean they are immune from corruption and failure. As I had to use SATA drives in this configuration, I would go with RAID 10 (requiring 4 drives total) for the SATA 4TB drives: you get increased speed and redundancy. That would increase the cost by only $220.
- Warranty: this is what attracts me to Lenovo, as they are one of the two PC makers I know of that offer a warranty over 3 years. Why is that important? Well, after 39 years in the high-end electronics manufacturing industry (optical routers, servers, etc.) and having visited and assessed the technical manufacturing capabilities and engineering knowledge of the major Electronic Manufacturing Service providers in the world (who are basically building the motherboards, GPUs, etc. for everyone), let me just say that anything after 3 years of service is when the problems start to appear for most products and their components. That is why 3-year warranties are FREE. A good rule to follow: if you can get more than a 3-year warranty for less than 10% of the purchase price, it is probably a good bet to get it.
- The Perks-at-Work cash back. That is money the Perks-at-Work program gives you "to spend on your next purchase." As you are buying this for work, that money could go to you directly for your next personal purchase (wink...wink...nod...nod). 😉 Think of it like flying for business but keeping the frequent flyer miles.
- The 53% discount. This is the discount that my employer's Perks-at-Work program gives you. My car insurance program has one too.... but it was only 40%. So mileage will vary, but it does make a big difference.

So, from my own perspective, yes, it would be nice to have the faster CPU and faster memory. But that would cost me 38% more if I were paying for this out of my own pocket. Will I get 38% faster render times? Not sure -- that is a pretty big difference. So, between the cost savings and the 5-year warranty, I keep my eyes on Lenovo. Dave -
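The warranty rule of thumb and the discount math above fit in a few lines of Python. A sketch only: apart from the $10,560 Puget quote, every number below (the Lenovo list price, the warranty cost) is a hypothetical placeholder.

```python
def warranty_worth_it(purchase_price, extended_warranty_price):
    """Rule of thumb from the post: an extended (more than 3-year) warranty
    is probably a good bet if it costs less than 10% of the purchase price."""
    return extended_warranty_price < 0.10 * purchase_price

def discounted_price(list_price, discount_pct):
    """Effective price after a Perks-at-Work style percentage discount."""
    return list_price * (1 - discount_pct / 100)

# Illustrative numbers: $10,560 is the Puget quote from the post;
# the Lenovo list price and warranty cost are made up for the example.
print(discounted_price(15_000, 53))    # effective price after a 53% discount
print(warranty_worth_it(10_560, 900))  # warranty cost sits under the 10% line
```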
Lenovo workstations can support your needs. I recommend the P620 series of workstations, as they support the AMD Threadripper processors. You can configure your own here. For price, cores and speed, Threadripper is hard to beat. You can have more than one GPU, and here they favor the nVidia RTX-A family of GPUs, with RAM amounts up to 48 GB on the RTX-A6000. That should meet your needs of two RTX 3090 Ti's from a memory perspective (though I am not sure RTX 3090 Ti's can be slaved together as one with NVLink). You can also have your SSD boot drives in multiple RAID configurations, as well as SATA drives (again, multiple configurations) for pure storage, and memory up to 128 GB, but at 3200 MHz ECC. If you are not in favor of Threadrippers, the top-of-the-line Intel workstation is the P920 series, found here.

Now, these are all workstations, and as such you tend to only find server-rated components (like Xeon processors, ECC memory, or the RTX-A series of GPUs). But they do have prosumer and consumer machines as well, and their price point is far below HP's. I would stay away from HP: build quality only starts to appear at the workstation level, and they are very pricey. Same with Dell, though they are a bit less expensive than HP.

The key to Lenovo pricing is to also order through a Perks-at-Work program. For example, you can get one through some car insurance programs that will net you 40% discounts. If your company only buys Lenovo, then maybe they have a Perks-at-Work program with Lenovo of their own (the discount rates vary from program to program). Overall, I have been very happy with Lenovo build quality. HP's is horrible. Dave -
Better yet...he has a sense of humor: Something to try this weekend: "Hey honey! In the mood for some fine alpha channel inversion? {wink...wink...}" 😉 Dave
-
Care to tell us more Mr. Insider? 😃
-
Nodal workflows are more difficult to conceptualize. As for me, I need to watch out for approaching nodes without first thinking through the core steps. Having written (and re-written) some rather extensive software programs for work (Unix shell based, with C), I can tell you that the first pass is just inefficient spaghetti code if you start without crafting a logic diagram. I would imagine the same applies to nodes. So I resonate with your comment on complexity.

Interestingly enough, Redshift has implemented the Standard Material node, which is brilliant. Nodal trees are behind it, but the front end echoes the channel system, which is a lot easier to wrap your head around.

On the Houdini side, I started to look at Igor's posted node diagrams in Houdini. Very pleased to say that I can understand "some" of the logic of what is going on (still a far way off from "all"). That is probably one of the big benefits of nodal workflows: you can see the approach taken. I look at some of Cerbera's or Vector's (or pretty much anyone else's) masterful meshes and you really can't figure out how they got there starting from a primitive. You do learn what good polygonal modeling looks like, but you have no idea how they did it. Not so with studying nodal workflows in Houdini. They are teaching opportunities.

I think that is a big downside to C4D nodes, which may be corrected with capsules. The nodal commands lean more toward mathematical functions than everyday 3D operations: normalize, decompose, cross products, vectors-to-matrix....and that was just to create a "look-at" function for animation. Honestly, I can't learn from that. Dave
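For what it's worth, the math behind such a "look-at" setup is exactly those pieces: normalize, cross products, then a vectors-to-matrix assembly. A minimal pure-Python sketch; the axis conventions here are my own assumption for illustration, not necessarily what C4D's node graph uses.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Aim the +Z axis from eye toward target.  Returns the three basis
    vectors (right, up, forward) of the rotation: the 'vectors-to-matrix'
    step.  Axis/handedness conventions are illustrative only."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))
    right = normalize(cross(up, forward))
    true_up = cross(forward, right)  # already unit length
    return right, true_up, forward

print(look_at((0, 0, 0), (0, 0, 5)))
# ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
```

The three returned vectors are the columns of the rotation matrix; the node graph wraps this same handful of vector operations, one node per step.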
-
Show off! 😃 Actually, I watched my first POP tutorial yesterday. Definitely needs a second viewing but whereas I was expecting to be completely out in left field and lost, I was able to follow it. Again, it needs another viewing. Dave
-
Wow...that certainly puts it into perspective. So despite how scene nodes may have improved since their "tech demo" days, you do have to ask if they are catching on with users. IMHO: Blender geometry nodes seem to be catching on much faster --- so it can't be an "if I am going to learn nodes, I am learning Houdini" type of reasoning that keeps people from diving into C4D scene nodes. So what keeps adoption rates low or slow? Is it lack of attributes? Is it too big and too complex? Is it too unstructured? Are scene nodes the 20-ton elephant in the room that is just too big for the average user to swallow: "There are 50 ways to do this simple thing and each iteration requires a minimum of 20 nodes"?

I would imagine that if there were some case studies, they would be all over the news section. But a quick search yielded nothing. It has been well over 2 years, as I think they were officially announced with R23. Fortunately, Maxon has still been investing in modeling tools, Redshift CPU, and improved cloth simulation. R26 was a much-needed update, so some respect there. But it does leave me wondering what the end state is with scene nodes. Too early to tell? Maybe. Dave

BTW: I am starting to put my toe in the Houdini waters. Soak time is required, as there is a lot there, but there is a structure and a methodology that is slowly (very slowly) becoming evident. But....and here is the difference....once people make that transition, they become passionate about Houdini. It was that excitement and enthusiasm from Igor and others that got me interested. Definitely not seeing that same level of energy from C4D node users.
-
I happened to come across this Blender geometry node "product" and I have to say it was quite impressive. So it did get me to thinking: How long has Blender been working on geometry nodes? How does Maxon's development time with C4D geometry nodes compare to Blender's? I know Maxon has been working on them for a few years. Weren't nodes supposed to be the essence of the new core? Are "geometry nodes" still considered a "technology demo" in R26 (isn't that like 5 or 6 releases after we first heard about them)? If C4D geometry nodes are now a full-fledged feature, has anyone (outside of Maxon employees) been proficient enough with them to make a commercially available generator of any sort?

I must say, this Buildify generator has a very strong "Houdini" vibe going. Now, I have not been keeping up with C4D nodes, nor have I been keeping up with Blender, but my general sense is that both C4D and Blender nodes appeared around the same time. I could very well be wrong, so please correct me, but I am very interested to know how their development times compare. It just "feels" to me that Blender nodes started to be generally discussed after we first heard of C4D nodes, yet people seem to be adopting Blender nodes quicker and doing more with them, judging by the video. I have yet to see a C4D geometry node example as full-featured as Buildify. Am I off base in suggesting that C4D nodes are lagging behind Blender in terms of adoption rates amongst C4D users?

If anyone has a C4D geometry node example as good as Buildify, please share. I will probably never want to get into nodes (if I am going to learn nodal modelling, I might as well learn Houdini), but I would definitely love to benefit from buying generators built by others that can do cool things like the buildings seen here from Blender. Dave
-
To make that analysis a bit more open, care to mention what the CPU and GPU rendering engines were? I would assume all things were equal in coming up with the different per-frame rendering times (e.g., same hardware, same scene, same output resolution, same anti-aliasing settings, etc.):

CPU = 35 minutes/frame
GPU = 10 minutes/frame
U-Render = 10 seconds/frame

Were the GPU/CPU rendering engines biased or unbiased? Were they the same engine? The fairest comparison would be for them both to be Redshift (a biased engine), as R26 now runs it on both CPU and GPU. But even then, with a biased render engine, skillful ray path optimization can drastically reduce render times. To truly appreciate the 10 second/frame render times in U-Render, we need to hear more about the other render engines used in this study and how all the settings across all the render engines compared. And finally, side-by-side comparisons of the finished images would be helpful. Thanks, Dave
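Those per-frame figures compound quickly over a full animation. A quick sketch of the arithmetic; the 300-frame count is a hypothetical example, not from the post.

```python
def total_render_hours(seconds_per_frame, frames):
    """Wall-clock hours to render an animation at a fixed per-frame cost."""
    return seconds_per_frame * frames / 3600

frames = 300  # hypothetical: a 10-second animation at 30 fps
for name, spf in [("CPU", 35 * 60), ("GPU", 10 * 60), ("U-Render", 10)]:
    print(f"{name}: {total_render_hours(spf, frames):.2f} hours")
# CPU: 175.00 hours, GPU: 50.00 hours, U-Render: 0.83 hours
```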
-
So I started watching the Nine Between series "Houdini Isn't Scary" on YouTube. Lesson 2 (making a donut with sprinkles) required repeat viewing while on the bike this morning. Probably going to need yet another pass at it tomorrow. Here are my high-level takeaways. Again, probably a bit muddled, as I still need soak time to let it sink in, and I am probably mixing terminology as well.

- In order to keep everything procedural, you need to define not only the various data elements of an object but also how you arrived at those "groupings." That is why making a simple selection involves a couple of nodes.
- As your node tree grows, it appears that you are copying polygons and laying them all on top of each other, with each copy undergoing different modifications (creating non-manifold surfaces in the process). But that is not really what is happening, because you are just using the attributes of that object for modification. This is where Houdini is different from standard DCC applications, and probably the biggest thing to unlearn.
- With that said, attributes are a huge part of your ability to create procedural systems. Not sure why attributes are not available via a drop-down list given that they are case sensitive. I know you can define your own, but still, there are plenty of UI ways to have both.
- Node order makes a big difference, so you need to be aware of not only what you are modifying but when.

Okay...my head hurts. Now onto my day job, where I get to deal with simple things like weak organic acids and how their rate of consumption impacts oxide removal during IR pre-heat. 😄 Dave
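That last point about node order can be illustrated outside Houdini with a toy example: feeding the same point through the same two operations in a different order gives a different result, much like re-wiring two nodes in a chain. A minimal Python sketch:

```python
def translate(point, offset):
    """Move a point by a fixed offset (one 'node' in the chain)."""
    return tuple(p + o for p, o in zip(point, offset))

def scale(point, factor):
    """Scale a point about the origin (another 'node')."""
    return tuple(p * factor for p in point)

p = (1.0, 0.0, 0.0)
a = translate(scale(p, 2.0), (5.0, 0.0, 0.0))  # scale first, then translate
b = scale(translate(p, (5.0, 0.0, 0.0)), 2.0)  # translate first, then scale
print(a)  # (7.0, 0.0, 0.0)
print(b)  # (12.0, 0.0, 0.0)
```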
-
Very impressed with this shader. There is a 10% discount code at the YouTube site, which brought the price down to $15.30 USD. Also, the .sbsar file will NOT work with R23 because it requires the latest Substance Engine to run, but you can download the latest Substance Player to generate 2K masks for use in C4D or any other DCC software. If you want to run it within C4D, I think you need R25 or later (not sure when the latest Substance engine was implemented within C4D). It comes with about 20 presets and over 240 MB of textures (9 texture sets: Color, Emissive, Normal and ORMH --- which is typically a single packed map carrying Ambient Occlusion, Roughness, Metallic and Height in its channels). Nevertheless, very useful for such a low cost (IMHO). Dave
-
The C drive (or the SSD drive) is getting a bit full with only 23.7Gb remaining. There are some programs I really should remove though. House cleaning is always good. Fortunately, there is another Drive D (a SATA drive) that has 837 Gb free so good to know that I can re-direct the cache to D drive. Plenty of room there. Dave
-
Thank you all. I have created a Houdini account and looked at the downloads. At 4.5 GB of required hard drive space, I probably need to do some disk cleaning first (I only have 23 GB left). But I have the training recommendations and access to the free version.....now I need space and time (whooo...that certainly sounds metaphysical, doesn't it!).

My learning approach is rather interesting: just listen to the videos first. Figure out the whys by watching. Focusing on key-clicks during a tutorial reinforces muscle memory but not understanding. So it is a two-pass process. The first pass is to just listen and absorb. The second is to sit down with the software and grow the muscle memory with the UI. As CG is not my day job, the first pass is actually the easiest thing to find time for, as I do about 50 minutes on the Echelon every morning and the bike is in front of a TV with internet access. So those free links are very helpful. Dave
-
Just came across this interesting little Substance shader file at ArtStation. Remember that C4D can work with .sbsar files, so this could be a good solution for procedurally creating hull textures. Plus....not that expensive. When I go to the creator's site on ArtStation (John Ender), I actually see two shaders: one for panels and one for textures. Dave
-
Thank you all. What you are showing is how good life is on "the other side". Now, I know that not all tutorials are created equal. For example, there is even a series out by the Maxon Training Team called "Ask the Trainer" that I have been watching, where the quality varies significantly. That is all I will say there. So, when I do a similar search for Houdini beginner tutorials, who would you recommend? Who does the best job of navigating newbies through the transition into the "Houdini mindset"? I do sense that to be successful with Houdini, your learning needs to be built on a base wrapped around a good understanding of that mindset.

Still a little nervous about that path to "the other side", as I will admit that a similar foray into learning modo did not go well. I tried to pick up that program around version 4, as I was growing dissatisfied with the slow maturation of the C4D modeling tools. Sorry...I just could not get my head wrapped around something as simple as modo's object manager, which was a huge stumbling block going forward. I mean, I did get it; it wasn't really hard to understand. I just didn't like it, nor could I see the value of that approach, and I guess my engineering mind would not let me move on. If I don't see the value of someone's approach to a solution, I really struggle with accepting it (which also explains my slow rate of adoption with Blender: they have come a long way, but I honestly believe there are some dumb legacy things they are holding onto). Therefore, I hope not to have that same disagreement with Houdini's approach, and so I am looking for a trainer who spends more time explaining the "whys" behind Houdini's approach rather than the key-clicks (which are the easiest things to pick up). I know Mike recommended HipFlask, but for someone who just wants to dip their toe into the Houdini world, I am looking for "free" tutorials first. Dave
-
That is the most compelling testimony I have heard all year about any piece of software. I must say, I am a little bit envious. Nothing is better than having passion for not only what you are creating but how you are creating it. But I know the pain you went through to get past the learning curve and reach that point. Again, I keep hearing how the struggle continues until things "click"....so I keep asking: what was it that made things click? What insight, awareness, intuition, whatever you want to call it, finally created the "aha moment" when everything about Houdini fell into place and made the learning easier? The old film adage "you must unlearn what you have learned" seems to apply here, but I am hoping to short-circuit that path and start the journey with the correct mindset. I just need to know what that mindset is. Dave
-
This was just a quick model using the Divider Modifier, made for the sole purpose of determining which inner extrude and face extrude settings would work best. Too narrow-and-tall or too wide-and-short and the panels really do not show up well in the render, simply because they are not picking up enough of the light. I also wanted to express what works as a ratio between depth and width, so that the guidelines hold regardless of the scale of your model.

The Divider Modifier is an amazing script for this type of work, with many different controls for creating all the different flows you discussed. The types of flows can be established based on your starting selections. So far, it is the best C4D tool I have seen for this type of work. Here is my workflow:

1. Model the base shape. Here I recommend following all the appropriate modeling concepts of quads and establishing good modeling edge flows. These flows will ease the polygon selection process used to define the panel flows, as ICM has pointed out in this thread.
2. Make a polygon selection. The Divider script will NOT work with anything over 50 polygons, so keep your selections within that limit.
3. Run the Split command to separate that selection into a different object. For scene management purposes you may want to move that selection to a different layer as well, but that is up to you. The remaining steps will work with that split object only, which I will refer to as the "Panel Object".
4. Place that Panel Object under the Divider Modifier. Make sure that your Viewport has been set to Gouraud Shading (Lines) so that you can see what you are doing.
5. Adjust the settings in the Divider Modifier to get the look you are after. Depending on the size of your selection, I advocate going with as few iterations as possible. The reason is that the higher the iterations, the smaller the panels, and remember that each panel needs to undergo an inner-extrude step for separation. If twice the inner extrude value exceeds the minimum length or width of any single panel, you will get corrupt geometry. To carry on the discussion from the previous post on the dimensions of the grooves between panels: twice the inner extrude value MUST be less than the smallest length or width of EVERY panel.
6. One neat adjustment: the X and Y sliders will shift the bulk of the panels to one side, leaving you with simple levels of panels on the opposite side. This is a nice adjustment to make if you want to create a parallel row of panels as discussed in this thread. In the photo below, I shifted the X-Slider to 99%.
7. Once satisfied, select the Divider Modifier and make it editable. This will leave you with a number of actually separate polygons connected into one object. The Divider Modifier uses the Shatter Object, which will appear after you make the object editable. You can delete it, along with everything else except the polygon object for the Panel Object you created.
8. Select the new Panel Object and select either all the faces, or only a few if you want to create open spaces that have no panels.
9. Inner extrude those faces as you see fit. My previous post on values will hopefully be of some value.
10. With those faces still selected, perform an extrude to create the panel.
11. With those extruded faces still selected, create a selection tag.
12. Apply your material to those selected faces. Set UV projection to Flat, Cubic, Tri-planar, etc. and adjust scale appropriately.
13. With the faces still selected, hit "Grow Selection" just once, so that not only the top faces of each panel are selected but the sides as well.
14. With this new selection still active, you may want to invert the selection and create another selection tag. Here is why: remember that you still have your original object underneath the Panel Object. The Split command just duplicates the selected faces; it does not separate them. Therefore, to avoid rendering errors from two overlapping polygons occupying the same space, you will want to either delete the original object or delete everything but the raised panels on the Panel Object. The inverted selection tag you created will help you do this, depending on which way you want to go for your model. Just select it and then hit delete.

I suggest keeping the original object and deleting everything between the panels on the Panel Object. I would also suggest selecting all the panels in Step 8 rather than a subset. If you want to create negative spaces with no panels in your finished model, simply select the panels you don't want after they are all created and delete them. Also, you can make the panels show better if you texture the spaces between them a bit darker than the panels themselves. This is a lot easier to do by applying a darker variant of the panel texture to the original object than to the selection of the spaces between them (UV mapping that darker texture will be cleaner and easier on the original model).

Go back to step 1 and repeat for your next selection. Keep going until you give even Ansel Hsiao a run for his money! 😄 It may sound like a lot of steps, but in reality it moves pretty quickly. Dave
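The inner-extrude constraint above (twice the offset must stay below the smallest length or width of every panel) can be turned into a quick sanity check. A Python sketch with hypothetical panel dimensions:

```python
def max_safe_inner_extrude(panel_dims):
    """Largest inner-extrude offset that avoids corrupt geometry, given the
    rule that twice the offset must stay below the smallest length or width
    of every panel.  panel_dims is a list of (length, width) tuples."""
    smallest = min(min(length, width) for length, width in panel_dims)
    return smallest / 2

panels = [(40.0, 25.0), (30.0, 18.0), (55.0, 22.0)]  # hypothetical panel sizes
print(max_safe_inner_extrude(panels))  # 9.0 -> keep the offset below this
```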
-
This is what you said, in response to what I said, and it looks to me like we are actually in agreement. The main point being the amount of work required to add different light-emissive materials across all those windows. The only way I can think to approach it is that they are pre-built into a bunch of pre-made models that are part of his kitbash arsenal.

Honestly, a whole thread on the approach, mindset, discipline and force of will needed to simply complete projects of the size Ansel Hsiao takes on would be a worthy discussion. Where do you find that energy? Dave