Leaderboard
Popular Content
Showing content with the highest reputation on 09/12/2022 in all areas
-
Maybe you can do something like this. ShockwaveDeformationMV6a.mp4 ShockwaveDeformationMV6.c4d (2 points)
-
I'd have a go with a formula deformer personally! That does an excellent ring-wave right out of the box! CBR (2 points)
-
No surprise that Perpetual is dead. No surprise that it was done in such a sh**ty, disrespectful manner to long-standing customers. Apologies for the turn of phrase, but they earned it. (2 points)
-
A better GPU generally allows more polygons, or more complex viewport shaders, to be displayed without slowing down. If it has more memory, that allows more textures at higher resolution to be displayed in the viewport. But you'll only notice this IF you aren't being slowed down more by some single-threaded process, like generators, deformers, dynamics, etc. (1 point)
-
A very good post. Let me just chime in here relative to the internet. I work for Cisco (the world's largest end-to-end telecommunications equipment manufacturer) in the area of manufacturing and with our product development teams. I have seen what is coming. All I can say is that Cisco, our competitors, and our suppliers are positioning themselves as if everything is going to go to the cloud, and are now designing/building equipment that can accommodate the bandwidth issues.

Even software companies are pushing for personal cloud storage for all your PC files. Windows 11 now pushes OneDrive for personal cloud storage of all your personal files. At work, Office 365 automatically opts you in to having all your work files stored in the cloud. Even virus-protection software like Norton and Bitdefender is pushing personal cloud storage as a mitigation against ransomware ("Don't risk losing all of your files to ransomware; use our cloud storage service"). A big player in all of this is also Amazon Web Services (AWS). So, some pretty big players out there are pushing cloud storage for both businesses and individual users.

Given the revenue potential of SaaS, it is not hard to imagine that, with all your files in the cloud and all your software licensed rather than owned, within the next 10 years the software goes to the cloud as well, and all you need is an internet appliance to make the connection. I would not be surprised if Maxon is looking at a cloud-based version of C4D right now. Purchasing Forger may be a first step into that world.

Relative to the environmental impact, PCs going to the cloud is not going to have that much of an effect, as smartphones are already driving data-center usage more than PCs. On average (based on 2021 numbers and 2022 predictions), 73 million PCs were sold compared to 1.5 billion smartphones (a 20:1 ratio). Each of those devices requires an internet connection through a data center. So, the size of the data centers and the carbon footprint they generate are being driven by smartphones. Most people do not recognize the impact that cellphones have on the environment, both in their manufacturing (they do require rare-earth metals to be mined as well) and in their overall power consumption via data centers. Also, as 1.5 billion people are not being born each year, that means close to 1.5 billion OLD phones are being disposed of this year, hopefully in an environmentally friendly way. So relative to your justified concern over the environment, shutting off your cell phone before shutting off your computer or internet appliance will have a more positive impact. Dave (1 point)
-
Just to wrap this up (pun intended): in the end I used a high-poly cylinder with an image-based displacement shader and a smoother on it, and it turned out really well! Thanks for the help guys, Maarten (1 point)
-
Object mode is there for animation only; while modeling, I personally never touch the scale parameter of an object... (1 point)
-
On the scene nodes/capsules topic, the biggest problem I see is that they are being abused, in the sense that people create completely uninterpretable monstrosities out of them. That is not because there is a flaw in scene nodes themselves, or because they are a very low-level toolset, but rather, just as with the deficiency found in most programs written by professional programmers, because people don't understand the following simple concept: as something gets more complex, you need to add meaningful names, create levels of abstraction, componentize (with further meaningful naming of entire assemblies of sub-components), layer (to allow viewing a subset in isolation), and in the end present multiple "vantage points," containing various levels of detail and subsets of objects, from which the entirety of a scene can be viewed.

This is not an issue that was introduced with scene nodes; it existed with XPresso and modeling in general. But as items that are more complex, dynamic, and low-level get added, the problem becomes greatly exacerbated. To use an illustrative analogy, no auto mechanic opens a car service manual to find a single diagram containing every nut, bolt, part, and assembly that makes up the car. They wouldn't be able to find a thing, or even comprehend the image as a whole; it is just too complex to be taken in by a human being. Furthermore, the illustrations that are present don't show every screw, regardless of length, diameter, type, or use, as just screw, screw.1, screw.2, screw.3, screw.N, making any similarities and differences between their various characteristics completely undecipherable. Instead, illustrations of the automobile's various systems are divided by subsystem, and even then, there are various levels of detail shown to make the diagrams both comprehensible and cohesive (i.e., limited to the function/component being depicted).
Even at the highest level of detail, perhaps showing individual screws, said screws are labeled with meaningful identifiers or group labels, and perhaps even given a context, in order to help convey the role they play as part of a greater whole. This is the very thing that most modelers who create large scene node graphs seem to completely lack. I see very few (if any at all!) counter-examples to this.

It's hard enough to find scene files where people actually label objects properly and use layers; they often skimp on these barest of necessities, leaving things named as they were originally by Cinema 4D when their testing/hacking ultimately evolved into a final scene. Even with all of the "smarts" in its engine, Cinema 4D could not possibly know the semantic meaning and role of every particular piece of geometry, or group of geometries, and come up with a meaningful label to (re)name it. We wind up with scenes containing sheer stupidity like Cube, Cube.1, Cube.2, Cylinder, Cylinder.1, and, when grouping is at the very least attempted, Null, Null.1, Null.2, etc. I am not referring to test scenes made by someone who just started to learn Cinema 4D, and perhaps polygonal modeling in general. These are scenes made by professionals and educators, across the board.

How in the world can one make sense of such a scene some six months down the road, even for the very person who created it? Is it that difficult to label things when they are created, or, if uncertain due to experimentation, to go back once things settle and (re)label them with meaningful names more descriptive of their final roles? Is it that hard to assign layers to parts of a scene to allow it to be viewed easily in meaningful cross-sections, representing subsets of the whole, rather than having to view every little thing, with every last detail, all of the time?
I am not only addressing this at modelers but, even more so, at the training presenters/professionals on Maxon's own YouTube channels: the very people who should be imparting these concepts to the more novice viewers, but who instead are making the very same mistakes mentioned above and teaching "all the wrong things," perhaps as an unintended side-effect, indirectly affirming or conveying that this is standard practice for modelers.

Here is an example from the Maxon Training channel of the very thing I am talking about, at least with regard to the naming portion, screen-captured from Quick Tip #48 (with a small amount of sharpening, because it was a bit blurry, having originated from a YouTube video screenshot). Take a look at the Object Manager in the right third of the scene, above. How can one possibly hope to make heads or tails out of those names with regard to the function they serve or the role they play? There are at least three critical flaws here that I would like to describe in greater detail:

1. The names are generic and completely non-descriptive of the semantics of each operation. Even worse for this particular scene, names are repeated, even though said names are highly ordering-dependent (with respect to the operations above and below them), and therefore the operations they perform can be completely different depending on where they appear in the list (i.e., ordering).

2. The list is flat (i.e., linear) and not hierarchical, which, combined with the naming, makes it completely impossible to ascertain which items, when combined as a logical group, perform a particular higher-level task.

3. There was no attempt to modularize and/or componentize, to present the above in the Object Manager using higher-level functionality that the user could then drill down into, piecemeal, in order to see how a particular task was accomplished in terms of its underlying lower-level node operations, in isolation (think of soloing a small group, rather than viewing the entire scene as a whole).

Naming is important, levels of detail are important, organization is important, modularization/componentization is important, layering subsets is important, and reuse (of the same object via instances/cloners/etc.) is important. These and many more similar topics should really be stressed, taught, and ingrained in junior modelers from the get-go, and this advice is even more applicable to presentations made by professionals for purposes of education. Just my two cents... (1 point)
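To make the "generic names" complaint concrete, here is a minimal, tool-agnostic sketch of how one might flag unrenamed defaults in a list of object names. This is purely illustrative Python: it does not use the actual Cinema 4D API, and the set of default base names is an assumption, not an exhaustive list of what Cinema 4D generates.

```python
import re

# Default-style names such as "Cube", "Cube.1", or "Null.2" carry no semantic
# meaning. This helper flags them so they can be renamed descriptively.
# GENERIC_BASES is illustrative, not an exhaustive list.
GENERIC_BASES = {"Cube", "Cylinder", "Sphere", "Plane", "Null", "Extrude"}
GENERIC_RE = re.compile(r"^(?P<base>[A-Za-z ]+?)(\.(?P<suffix>\d+))?$")

def find_generic_names(names):
    """Return the subset of names that look like unrenamed defaults."""
    flagged = []
    for name in names:
        m = GENERIC_RE.match(name)
        if m and m.group("base") in GENERIC_BASES:
            flagged.append(name)
    return flagged

scene = ["Cube", "Cube.1", "Piston Housing", "Null.2", "Valve Cover"]
print(find_generic_names(scene))  # → ['Cube', 'Cube.1', 'Null.2']
```

A linter like this could run over an Object Manager hierarchy before publishing a scene, nudging the author toward names like "Piston Housing" instead of "Cube.1".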
-
I would agree with you; however, look at Blender and how users with seemingly less than 5-7 years of experience in the industry are able to take to Geo Nodes easily and push out training and free node setups. If nodes are done right, easy to understand, and there... is... constant... training... available, people will learn. Instead of VFX n Chill, there should be a Node n Chill weekly segment. It could be dumb stuff like "this week we're going to make a tweakable couch in nodes". The next week it could be "we're going to create a dynamic rope bridge using nodes and our new unified dynamics suite". Another week could be the infamous "ivy growth" setup that Blender users seem to love. I mean, the Maxon Training Team have run out of stuff to say about Redshift at this point... I would think?

Edit: I would wager a majority of Blender installations are from hobbyists, newbies to 3D, or people dabbling in Blender from another app. The fact that these same users are taking to Geo Nodes and even producing their own content shows that their geo nodes are not necessarily hard to learn after putting a little bit of effort in. Again, they are afforded a lot of free training on YouTube, Udemy, Skillshare, and more. (1 point)
-
You can't announce a technology preview and then have zero to show for it two and a half years later, with nary even a peep of a mention in the latest update. Maxon is quickly becoming famous for simply ignoring whatever it doesn't want to talk about, as if it doesn't even exist, and at this point it looks like nodes are one of those things. (1 point)
-
Well, that answers it then. But when you look at the computing industry in general, we all knew this day was coming. Software as a Service (SaaS) really sucks for the consumer, but everyone is moving to it. Microsoft no longer offers MS Office as a purchased perpetual license; Office 365 is now subscription only (I wonder about the "365" tag at the end... it kind of rubs in the fact that they are making you pay yearly).

Honestly, how far will SaaS go? Will it extend to operating systems? If you want your computer to boot in the morning or your cell phone to turn on, pay the annual subscription fee. Will it extend to driver support? Well, your computer boots, but the screen flickers because the latest OS is incompatible with the GPU drivers; pay the subscription fee to get the latest GPU drivers. You may even see unscrupulous hardware vendors forcing those incompatibilities just to generate licensing revenue for their latest drivers. Never mind that every piece of software you run could go to SaaS; imagine if that extended to all the drivers and the OS as well. And I have yet to even touch the internet and the potential for subscriptions to your favorite sites (including this one). You pay out $5 a month each to maintain 10 or 20 licensing subscriptions, keeping both your PC and your most common apps running along with connectivity to the internet, and pretty soon you're dropping over $1,000 a year. Sound crazy? Well... 10 years ago SaaS was just being introduced by Adobe, and now it is everywhere. Imagine what happens in the next 10 years.

If people start to be over-burdened with licensing costs (and the headache of maintaining all those subscriptions), then I see a greater drive for everything going to the cloud. This completely removes the need for a personal computing device. You just have an internet appliance to access all your software from your ISP. Your ISP maintains the hardware and offers different levels of service and software in a tiered pricing model. You want basic MS Office capability? That is one tier. You want render-farm access with C4D? That is its own tier. Oh... and within those tiers come monthly caps on data usage. Sorry, but your tier only supports 1 TB of data consumption a month; please pay $30 more for the next terabyte. Scary world, isn't it?

Now, the safe haven offered by open-source programs is not guaranteed. They undermine the big tech companies' ability to milk as much as they can out of SaaS. As more of our computing infrastructure moves to SaaS, there will be an increase in open-source adoption. This is where the cabal of big tech companies pushes for the cloud and the internet appliance. As more people shift to the internet appliance, individual PC hardware sales will drop. It will just be cheaper to go with the internet appliance than to pay annually for the software to run on your own hardware (helped along by these same companies raising their annual licensing costs). The law of supply and demand kicks in, and pretty soon owning your own hardware becomes cost prohibitive. Without a PC, you are limited in your ability to run open-source software, because you can pretty damn well bet that your ISP is NOT going to support access to open-source software.

This is sort of happening now with Windows 11 "S". "S" mode in Windows 11 ONLY allows you to download software from the Microsoft App Store, in the interest of ensuring that you are free from malware and viruses. "For your protection," Microsoft tells you that Windows S-approved software from their app store is completely virus-free. Interestingly enough, Google Chrome is not an MS App Store offering, even though its software is at the core of MS Edge. So I guess "S" stands for "Security" and not "Subscription". But who are we kidding?

Dave

BTW: For people who think that PC companies will fight to keep selling their hardware, here is a shocking revelation from someone who has been in the electronics hardware manufacturing business for over 30 years: these companies hate building hardware! It's hard. It has supply-chain issues. There are warranty-repair and reverse-logistics issues. There are regulatory requirements on the materials they use. The cost of releasing a new product is very high. Lots of cost for not a great margin, especially in the consumer market. But software is so much easier to manage, because its only cost is people. The first unit may cost a lot, but after that it is all pure revenue. Selling software is like printing money when compared to hardware. Thus, all the big hardware vendors will be motivated to push for cloud computing once they figure out how to sell their software into that platform as well. Ever wonder why nVidia keeps churning out all these really neat graphical applications? Who would have expected that from a hardware vendor? (1 point)
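The back-of-the-envelope figure in that post (10-20 subscriptions at roughly $5/month adding up to over $1,000 a year) checks out at the upper end. A trivial sketch, using only the post's own illustrative numbers, not real prices:

```python
def annual_cost(num_subscriptions, monthly_fee):
    """Total yearly spend for flat-rate monthly subscriptions."""
    return num_subscriptions * monthly_fee * 12

# The post's scenario: 10-20 subscriptions at about $5/month each.
low, high = annual_cost(10, 5), annual_cost(20, 5)
print(f"${low} to ${high} per year")  # → $600 to $1200 per year
```

So the "over $1,000 a year" claim holds for the 20-subscription case, and even the low end is a meaningful recurring cost.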
-
That's what I thought when I heard "C4D 2023", and it is exactly what we predicted when subscription came up three years ago. Using R25 as the last perpetual version is just another expression of their contempt for this kind of license and customer. (1 point)
-
From Maxon this morning: "I can see on your account that you are already using the latest perpetual release for C4D – R25. There has not been a further perpetual release this year, with the latest version now only being available on subscription." (1 point)
-
Project Asset Inspector has search and filter. Click the magnifying glass in the upper right, or choose Filter / Filter Bar. (1 point)
-
For the record, I want search for materials too. Unfortunately the material manager is really old code, and I'm told we have to rewrite the whole thing to add it. Something we'll need to do anyway, but harder to fit on the roadmap... (1 point)
-
I'm another one currently on R21. I really hope Maxon deliver enough in R27 to tempt me to upgrade, but having got reasonably comfortable with the basics of Houdini over the last few months, I think that will be a tough call. £2K for a C4D upgrade vs sticking with R21 and Houdini Indie at £165/year: what would you do? (1 point)
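For what it's worth, the comparison in that last post can be framed as a simple break-even calculation. A quick sketch using the poster's own figures (£2K one-off vs £165/year), ignoring future upgrade cycles and any feature differences between the two packages:

```python
def years_to_match(one_off_cost, yearly_cost):
    """How many years of the yearly option equal one one-off payment."""
    return one_off_cost / yearly_cost

# Poster's figures: ~£2,000 C4D upgrade vs Houdini Indie at £165/year.
years = years_to_match(2000, 165)
print(f"{years:.1f} years")  # → 12.1 years
```

In other words, at those prices, roughly twelve years of Houdini Indie cost the same as a single C4D upgrade, which is why the poster calls it a tough call.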